2026-02-05 00:00:06.954420 | Job console starting
2026-02-05 00:00:06.972652 | Updating git repos
2026-02-05 00:00:07.087256 | Cloning repos into workspace
2026-02-05 00:00:07.414670 | Restoring repo states
2026-02-05 00:00:07.443854 | Merging changes
2026-02-05 00:00:07.443875 | Checking out repos
2026-02-05 00:00:08.013474 | Preparing playbooks
2026-02-05 00:00:09.052028 | Running Ansible setup
2026-02-05 00:00:17.072610 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-05 00:00:19.022576 |
2026-02-05 00:00:19.022708 | PLAY [Base pre]
2026-02-05 00:00:19.038403 |
2026-02-05 00:00:19.038523 | TASK [Setup log path fact]
2026-02-05 00:00:19.058052 | orchestrator | ok
2026-02-05 00:00:19.088236 |
2026-02-05 00:00:19.088375 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-05 00:00:19.181481 | orchestrator | ok
2026-02-05 00:00:19.193969 |
2026-02-05 00:00:19.194101 | TASK [emit-job-header : Print job information]
2026-02-05 00:00:19.316031 | # Job Information
2026-02-05 00:00:19.316280 | Ansible Version: 2.16.14
2026-02-05 00:00:19.316320 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-05 00:00:19.316361 | Pipeline: periodic-midnight
2026-02-05 00:00:19.316390 | Executor: 521e9411259a
2026-02-05 00:00:19.316411 | Triggered by: https://github.com/osism/testbed
2026-02-05 00:00:19.316433 | Event ID: d2fbb7a4c0254005bfb8ea044578dfa6
2026-02-05 00:00:19.326082 |
2026-02-05 00:00:19.326191 | LOOP [emit-job-header : Print node information]
2026-02-05 00:00:19.560893 | orchestrator | ok:
2026-02-05 00:00:19.561068 | orchestrator | # Node Information
2026-02-05 00:00:19.561104 | orchestrator | Inventory Hostname: orchestrator
2026-02-05 00:00:19.561130 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-05 00:00:19.561153 | orchestrator | Username: zuul-testbed02
2026-02-05 00:00:19.561174 | orchestrator | Distro: Debian 12.13
2026-02-05 00:00:19.561197 | orchestrator | Provider: static-testbed
2026-02-05 00:00:19.561218 | orchestrator | Region:
2026-02-05 00:00:19.561239 | orchestrator | Label: testbed-orchestrator
2026-02-05 00:00:19.561260 | orchestrator | Product Name: OpenStack Nova
2026-02-05 00:00:19.561279 | orchestrator | Interface IP: 81.163.193.140
2026-02-05 00:00:19.579463 |
2026-02-05 00:00:19.579563 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-05 00:00:20.297387 | orchestrator -> localhost | changed
2026-02-05 00:00:20.305015 |
2026-02-05 00:00:20.305115 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-05 00:00:24.366401 | orchestrator -> localhost | changed
2026-02-05 00:00:24.381591 |
2026-02-05 00:00:24.381698 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-05 00:00:25.510843 | orchestrator -> localhost | ok
2026-02-05 00:00:25.517295 |
2026-02-05 00:00:25.517392 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-05 00:00:25.547317 | orchestrator | ok
2026-02-05 00:00:25.604630 | orchestrator | included: /var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-05 00:00:25.652894 |
2026-02-05 00:00:25.653025 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-05 00:00:29.158048 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-05 00:00:29.158220 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/work/761bba5ad18141e6a739f58740b92162_id_rsa
2026-02-05 00:00:29.158252 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/work/761bba5ad18141e6a739f58740b92162_id_rsa.pub
2026-02-05 00:00:29.158274 | orchestrator -> localhost | The key fingerprint is:
2026-02-05 00:00:29.158294 | orchestrator -> localhost | SHA256:5Jq02PcDj9Rqmil/IXeeowicV3Y6quoRTHwM9VPYy9U zuul-build-sshkey
2026-02-05 00:00:29.158312 | orchestrator -> localhost | The key's randomart image is:
2026-02-05 00:00:29.158340 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-05 00:00:29.158358 | orchestrator -> localhost | | ... o. .        |
2026-02-05 00:00:29.158376 | orchestrator -> localhost | | . o .... . E    |
2026-02-05 00:00:29.158393 | orchestrator -> localhost | | o o o..o        |
2026-02-05 00:00:29.158409 | orchestrator -> localhost | | o . +o          |
2026-02-05 00:00:29.158425 | orchestrator -> localhost | | o . S..         |
2026-02-05 00:00:29.158447 | orchestrator -> localhost | | o =.*=oo        |
2026-02-05 00:00:29.158464 | orchestrator -> localhost | | . = *++O .      |
2026-02-05 00:00:29.158480 | orchestrator -> localhost | | ..o *=o*        |
2026-02-05 00:00:29.158497 | orchestrator -> localhost | | .o..+B+...o     |
2026-02-05 00:00:29.158513 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-05 00:00:29.158560 | orchestrator -> localhost | ok: Runtime: 0:00:01.799031
2026-02-05 00:00:29.164573 |
2026-02-05 00:00:29.164713 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-05 00:00:29.207839 | orchestrator | ok
2026-02-05 00:00:29.229716 | orchestrator | included: /var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-05 00:00:29.283252 |
2026-02-05 00:00:29.283358 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-05 00:00:29.335575 | orchestrator | skipping: Conditional result was False
2026-02-05 00:00:29.342135 |
2026-02-05 00:00:29.342231 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-05 00:00:30.339174 | orchestrator | changed
2026-02-05 00:00:30.349299 |
2026-02-05 00:00:30.349393 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-05 00:00:30.681533 | orchestrator | ok
2026-02-05 00:00:30.686685 |
2026-02-05 00:00:30.686785 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-05 00:00:31.245689 | orchestrator | ok
2026-02-05 00:00:31.263849 |
2026-02-05 00:00:31.263950 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-05 00:00:31.800280 | orchestrator | ok
2026-02-05 00:00:31.807035 |
2026-02-05 00:00:31.807134 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-05 00:00:31.840503 | orchestrator | skipping: Conditional result was False
2026-02-05 00:00:31.846247 |
2026-02-05 00:00:31.846337 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-05 00:00:33.274345 | orchestrator -> localhost | changed
2026-02-05 00:00:33.291607 |
2026-02-05 00:00:33.291701 | TASK [add-build-sshkey : Add back temp key]
2026-02-05 00:00:34.173472 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/work/761bba5ad18141e6a739f58740b92162_id_rsa (zuul-build-sshkey)
2026-02-05 00:00:34.173699 | orchestrator -> localhost | ok: Runtime: 0:00:00.023933
2026-02-05 00:00:34.180740 |
2026-02-05 00:00:34.180843 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-05 00:00:34.768602 | orchestrator | ok
2026-02-05 00:00:34.784742 |
2026-02-05 00:00:34.784840 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-05 00:00:34.847582 | orchestrator | skipping: Conditional result was False
2026-02-05 00:00:35.009832 |
2026-02-05 00:00:35.009946 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-05 00:00:35.649871 | orchestrator | ok
2026-02-05 00:00:35.704596 |
2026-02-05 00:00:35.706267 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-05 00:00:35.810174 | orchestrator | ok
2026-02-05 00:00:35.836725 |
2026-02-05 00:00:35.837896 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-05 00:00:36.573139 | orchestrator -> localhost | ok
2026-02-05 00:00:36.580273 |
2026-02-05 00:00:36.580363 | TASK [validate-host : Collect information about the host]
2026-02-05 00:00:38.344352 | orchestrator | ok
2026-02-05 00:00:38.377720 |
2026-02-05 00:00:38.377844 | TASK [validate-host : Sanitize hostname]
2026-02-05 00:00:38.479854 | orchestrator | ok
2026-02-05 00:00:38.484976 |
2026-02-05 00:00:38.485094 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-05 00:00:40.214498 | orchestrator -> localhost | changed
2026-02-05 00:00:40.220808 |
2026-02-05 00:00:40.220907 | TASK [validate-host : Collect information about zuul worker]
2026-02-05 00:00:40.988872 | orchestrator | ok
2026-02-05 00:00:40.995112 |
2026-02-05 00:00:40.995207 | TASK [validate-host : Write out all zuul information for each host]
2026-02-05 00:00:41.935608 | orchestrator -> localhost | changed
2026-02-05 00:00:41.944247 |
2026-02-05 00:00:41.944335 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-05 00:00:42.263796 | orchestrator | ok
2026-02-05 00:00:42.268556 |
2026-02-05 00:00:42.268643 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-05 00:02:04.361269 | orchestrator | changed:
2026-02-05 00:02:04.362398 | orchestrator | .d..t...... src/
2026-02-05 00:02:04.362464 | orchestrator | .d..t...... src/github.com/
2026-02-05 00:02:04.362498 | orchestrator | .d..t...... src/github.com/osism/
2026-02-05 00:02:04.362528 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-05 00:02:04.362554 | orchestrator | RedHat.yml
2026-02-05 00:02:04.379220 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-05 00:02:04.379237 | orchestrator | RedHat.yml
2026-02-05 00:02:04.379290 | orchestrator | = 1.53.0"...
2026-02-05 00:02:18.045542 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-05 00:02:18.201726 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-05 00:02:18.766687 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-05 00:02:18.834484 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-05 00:02:19.598252 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-05 00:02:19.663609 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-05 00:02:20.213035 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-05 00:02:20.213121 | orchestrator |
2026-02-05 00:02:20.213133 | orchestrator | Providers are signed by their developers.
2026-02-05 00:02:20.213143 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-05 00:02:20.213152 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-05 00:02:20.213163 | orchestrator |
2026-02-05 00:02:20.213171 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-05 00:02:20.213180 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-05 00:02:20.213205 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-05 00:02:20.213212 | orchestrator | you run "tofu init" in the future.
2026-02-05 00:02:20.480280 | orchestrator |
2026-02-05 00:02:20.480555 | orchestrator | OpenTofu has been successfully initialized!
2026-02-05 00:02:20.480675 | orchestrator |
2026-02-05 00:02:20.480709 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-05 00:02:20.480729 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-05 00:02:20.480749 | orchestrator | should now work.
2026-02-05 00:02:20.480769 | orchestrator |
2026-02-05 00:02:20.480787 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-05 00:02:20.480806 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-05 00:02:20.480928 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-05 00:02:20.651869 | orchestrator | Created and switched to workspace "ci"!
2026-02-05 00:02:20.651968 | orchestrator |
2026-02-05 00:02:20.651978 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-05 00:02:20.651984 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-05 00:02:20.651988 | orchestrator | for this configuration.
2026-02-05 00:02:20.767971 | orchestrator | ci.auto.tfvars
2026-02-05 00:02:20.787357 | orchestrator | default_custom.tf
2026-02-05 00:02:22.401336 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-05 00:02:22.950463 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-05 00:02:23.194717 | orchestrator |
2026-02-05 00:02:24.148386 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-05 00:02:24.148497 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-05 00:02:24.148515 | orchestrator |   + create
2026-02-05 00:02:24.148528 | orchestrator |  <= read (data resources)
2026-02-05 00:02:24.148541 | orchestrator |
2026-02-05 00:02:24.148553 | orchestrator | OpenTofu will perform the following actions:
2026-02-05 00:02:24.148564 | orchestrator |
2026-02-05 00:02:24.148576 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-05 00:02:24.148587 | orchestrator |   # (config refers to values not yet known)
2026-02-05 00:02:24.148598 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-05 00:02:24.148610 | orchestrator |       + checksum = (known after apply)
2026-02-05 00:02:24.148621 | orchestrator |       + created_at = (known after apply)
2026-02-05 00:02:24.148632 | orchestrator |       + file = (known after apply)
2026-02-05 00:02:24.148643 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.148684 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.148696 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-05 00:02:24.148707 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-05 00:02:24.148718 | orchestrator |       + most_recent = true
2026-02-05 00:02:24.148729 | orchestrator |       + name = (known after apply)
2026-02-05 00:02:24.148740 | orchestrator |       + protected = (known after apply)
2026-02-05 00:02:24.148751 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.148767 | orchestrator |       + schema = (known after apply)
2026-02-05 00:02:24.148782 | orchestrator |       + size_bytes = (known after apply)
2026-02-05 00:02:24.148799 | orchestrator |       + tags = (known after apply)
2026-02-05 00:02:24.148817 | orchestrator |       + updated_at = (known after apply)
2026-02-05 00:02:24.148841 | orchestrator |     }
2026-02-05 00:02:24.148864 | orchestrator |
2026-02-05 00:02:24.148887 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-05 00:02:24.148916 | orchestrator |   # (config refers to values not yet known)
2026-02-05 00:02:24.148935 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-05 00:02:24.148956 | orchestrator |       + checksum = (known after apply)
2026-02-05 00:02:24.148976 | orchestrator |       + created_at = (known after apply)
2026-02-05 00:02:24.149017 | orchestrator |       + file = (known after apply)
2026-02-05 00:02:24.149034 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.149058 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.149081 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-05 00:02:24.149099 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-05 00:02:24.149116 | orchestrator |       + most_recent = true
2026-02-05 00:02:24.149133 | orchestrator |       + name = (known after apply)
2026-02-05 00:02:24.149150 | orchestrator |       + protected = (known after apply)
2026-02-05 00:02:24.149164 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.149180 | orchestrator |       + schema = (known after apply)
2026-02-05 00:02:24.149195 | orchestrator |       + size_bytes = (known after apply)
2026-02-05 00:02:24.149210 | orchestrator |       + tags = (known after apply)
2026-02-05 00:02:24.149225 | orchestrator |       + updated_at = (known after apply)
2026-02-05 00:02:24.149240 | orchestrator |     }
2026-02-05 00:02:24.149258 | orchestrator |
2026-02-05 00:02:24.149279 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-05 00:02:24.149295 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-05 00:02:24.149312 | orchestrator |       + content = (known after apply)
2026-02-05 00:02:24.149330 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 00:02:24.149346 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 00:02:24.149391 | orchestrator |       + content_md5 = (known after apply)
2026-02-05 00:02:24.149410 | orchestrator |       + content_sha1 = (known after apply)
2026-02-05 00:02:24.149428 | orchestrator |       + content_sha256 = (known after apply)
2026-02-05 00:02:24.149449 | orchestrator |       + content_sha512 = (known after apply)
2026-02-05 00:02:24.149476 | orchestrator |       + directory_permission = "0777"
2026-02-05 00:02:24.149495 | orchestrator |       + file_permission = "0644"
2026-02-05 00:02:24.149516 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-05 00:02:24.149534 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.149561 | orchestrator |     }
2026-02-05 00:02:24.149583 | orchestrator |
2026-02-05 00:02:24.149602 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-05 00:02:24.149620 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-05 00:02:24.149640 | orchestrator |       + content = (known after apply)
2026-02-05 00:02:24.149657 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 00:02:24.149675 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 00:02:24.149692 | orchestrator |       + content_md5 = (known after apply)
2026-02-05 00:02:24.149711 | orchestrator |       + content_sha1 = (known after apply)
2026-02-05 00:02:24.149729 | orchestrator |       + content_sha256 = (known after apply)
2026-02-05 00:02:24.149748 | orchestrator |       + content_sha512 = (known after apply)
2026-02-05 00:02:24.149768 | orchestrator |       + directory_permission = "0777"
2026-02-05 00:02:24.149787 | orchestrator |       + file_permission = "0644"
2026-02-05 00:02:24.149821 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-05 00:02:24.149840 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.149856 | orchestrator |     }
2026-02-05 00:02:24.149899 | orchestrator |
2026-02-05 00:02:24.149934 | orchestrator |   # local_file.inventory will be created
2026-02-05 00:02:24.149952 | orchestrator |   + resource "local_file" "inventory" {
2026-02-05 00:02:24.149969 | orchestrator |       + content = (known after apply)
2026-02-05 00:02:24.149986 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 00:02:24.150003 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 00:02:24.150095 | orchestrator |       + content_md5 = (known after apply)
2026-02-05 00:02:24.150115 | orchestrator |       + content_sha1 = (known after apply)
2026-02-05 00:02:24.150133 | orchestrator |       + content_sha256 = (known after apply)
2026-02-05 00:02:24.150151 | orchestrator |       + content_sha512 = (known after apply)
2026-02-05 00:02:24.150169 | orchestrator |       + directory_permission = "0777"
2026-02-05 00:02:24.150185 | orchestrator |       + file_permission = "0644"
2026-02-05 00:02:24.150200 | orchestrator |       + filename = "inventory.ci"
2026-02-05 00:02:24.150216 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.150232 | orchestrator |     }
2026-02-05 00:02:24.150250 | orchestrator |
2026-02-05 00:02:24.150265 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-05 00:02:24.150281 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-05 00:02:24.150296 | orchestrator |       + content = (sensitive value)
2026-02-05 00:02:24.150312 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-05 00:02:24.150331 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-05 00:02:24.150348 | orchestrator |       + content_md5 = (known after apply)
2026-02-05 00:02:24.150530 | orchestrator |       + content_sha1 = (known after apply)
2026-02-05 00:02:24.150563 | orchestrator |       + content_sha256 = (known after apply)
2026-02-05 00:02:24.150575 | orchestrator |       + content_sha512 = (known after apply)
2026-02-05 00:02:24.150586 | orchestrator |       + directory_permission = "0700"
2026-02-05 00:02:24.150597 | orchestrator |       + file_permission = "0600"
2026-02-05 00:02:24.150607 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-05 00:02:24.150618 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.150629 | orchestrator |     }
2026-02-05 00:02:24.150640 | orchestrator |
2026-02-05 00:02:24.150651 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-05 00:02:24.150662 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-05 00:02:24.150673 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.150684 | orchestrator |     }
2026-02-05 00:02:24.150695 | orchestrator |
2026-02-05 00:02:24.150706 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-05 00:02:24.150718 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-05 00:02:24.150729 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.150740 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.150750 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.150761 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.150772 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.150783 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-05 00:02:24.150794 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.150805 | orchestrator |       + size = 80
2026-02-05 00:02:24.150815 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.150826 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.150837 | orchestrator |     }
2026-02-05 00:02:24.150848 | orchestrator |
2026-02-05 00:02:24.150859 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-05 00:02:24.150870 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:24.150881 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.150892 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.150903 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.150927 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.150938 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.150949 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-05 00:02:24.150960 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.150971 | orchestrator |       + size = 80
2026-02-05 00:02:24.150982 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.150992 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151003 | orchestrator |     }
2026-02-05 00:02:24.151014 | orchestrator |
2026-02-05 00:02:24.151025 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-05 00:02:24.151036 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:24.151046 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.151057 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.151068 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.151079 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.151090 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.151101 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-05 00:02:24.151112 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.151123 | orchestrator |       + size = 80
2026-02-05 00:02:24.151134 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.151145 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151156 | orchestrator |     }
2026-02-05 00:02:24.151166 | orchestrator |
2026-02-05 00:02:24.151177 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-05 00:02:24.151188 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:24.151199 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.151210 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.151221 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.151232 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.151242 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.151253 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-05 00:02:24.151264 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.151275 | orchestrator |       + size = 80
2026-02-05 00:02:24.151285 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.151296 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151307 | orchestrator |     }
2026-02-05 00:02:24.151318 | orchestrator |
2026-02-05 00:02:24.151329 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-05 00:02:24.151340 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:24.151351 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.151394 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.151427 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.151438 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.151449 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.151468 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-05 00:02:24.151479 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.151490 | orchestrator |       + size = 80
2026-02-05 00:02:24.151501 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.151512 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151522 | orchestrator |     }
2026-02-05 00:02:24.151533 | orchestrator |
2026-02-05 00:02:24.151544 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-05 00:02:24.151554 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:24.151565 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.151576 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.151587 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.151605 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.151615 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.151626 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-05 00:02:24.151637 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.151648 | orchestrator |       + size = 80
2026-02-05 00:02:24.151658 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.151669 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151680 | orchestrator |     }
2026-02-05 00:02:24.151691 | orchestrator |
2026-02-05 00:02:24.151701 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-05 00:02:24.151712 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-05 00:02:24.151723 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.151734 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.151744 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.151755 | orchestrator |       + image_id = (known after apply)
2026-02-05 00:02:24.151766 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.151777 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-05 00:02:24.151788 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.151798 | orchestrator |       + size = 80
2026-02-05 00:02:24.151809 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.151819 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151830 | orchestrator |     }
2026-02-05 00:02:24.151841 | orchestrator |
2026-02-05 00:02:24.151852 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-05 00:02:24.151864 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.151875 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.151885 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.151896 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.151907 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.151918 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-05 00:02:24.151929 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.151939 | orchestrator |       + size = 20
2026-02-05 00:02:24.151950 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.151961 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.151971 | orchestrator |     }
2026-02-05 00:02:24.151982 | orchestrator |
2026-02-05 00:02:24.151993 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-05 00:02:24.152003 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152014 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152025 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152036 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152046 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152057 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-05 00:02:24.152068 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152078 | orchestrator |       + size = 20
2026-02-05 00:02:24.152089 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152099 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.152110 | orchestrator |     }
2026-02-05 00:02:24.152121 | orchestrator |
2026-02-05 00:02:24.152132 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-05 00:02:24.152142 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152153 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152164 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152174 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152185 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152196 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-05 00:02:24.152206 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152225 | orchestrator |       + size = 20
2026-02-05 00:02:24.152235 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152246 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.152257 | orchestrator |     }
2026-02-05 00:02:24.152267 | orchestrator |
2026-02-05 00:02:24.152278 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-05 00:02:24.152289 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152299 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152310 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152321 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152332 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152342 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-05 00:02:24.152353 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152382 | orchestrator |       + size = 20
2026-02-05 00:02:24.152394 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152404 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.152415 | orchestrator |     }
2026-02-05 00:02:24.152425 | orchestrator |
2026-02-05 00:02:24.152436 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-05 00:02:24.152447 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152458 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152468 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152479 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152490 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152509 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-05 00:02:24.152521 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152537 | orchestrator |       + size = 20
2026-02-05 00:02:24.152549 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152560 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.152571 | orchestrator |     }
2026-02-05 00:02:24.152581 | orchestrator |
2026-02-05 00:02:24.152592 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-05 00:02:24.152603 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152614 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152625 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152636 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152647 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152657 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-05 00:02:24.152668 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152679 | orchestrator |       + size = 20
2026-02-05 00:02:24.152690 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152701 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.152712 | orchestrator |     }
2026-02-05 00:02:24.152722 | orchestrator |
2026-02-05 00:02:24.152733 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-05 00:02:24.152744 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152755 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152766 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152777 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152788 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152798 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-05 00:02:24.152809 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152820 | orchestrator |       + size = 20
2026-02-05 00:02:24.152831 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152841 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.152852 | orchestrator |     }
2026-02-05 00:02:24.152863 | orchestrator |
2026-02-05 00:02:24.152874 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-05 00:02:24.152885 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-05 00:02:24.152902 | orchestrator |       + attachment = (known after apply)
2026-02-05 00:02:24.152913 | orchestrator |       + availability_zone = "nova"
2026-02-05 00:02:24.152924 | orchestrator |       + id = (known after apply)
2026-02-05 00:02:24.152935 | orchestrator |       + metadata = (known after apply)
2026-02-05 00:02:24.152946 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-05 00:02:24.152957 | orchestrator |       + region = (known after apply)
2026-02-05 00:02:24.152967 | orchestrator |       + size = 20
2026-02-05 00:02:24.152978 | orchestrator |       + volume_retype_policy = "never"
2026-02-05 00:02:24.152989 | orchestrator |       + volume_type = "ssd"
2026-02-05 00:02:24.153000 | orchestrator |     }
2026-02-05 00:02:24.153010 | orchestrator |
2026-02-05 00:02:24.153021 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-05 00:02:24.153032 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-05 00:02:24.153043 | orchestrator | + attachment = (known after apply) 2026-02-05 00:02:24.153053 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.153064 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.153075 | orchestrator | + metadata = (known after apply) 2026-02-05 00:02:24.153086 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-05 00:02:24.153097 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.153108 | orchestrator | + size = 20 2026-02-05 00:02:24.153118 | orchestrator | + volume_retype_policy = "never" 2026-02-05 00:02:24.153129 | orchestrator | + volume_type = "ssd" 2026-02-05 00:02:24.153140 | orchestrator | } 2026-02-05 00:02:24.153151 | orchestrator | 2026-02-05 00:02:24.153162 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-05 00:02:24.153173 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-05 00:02:24.153183 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.153194 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.153205 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.153215 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.153226 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.153240 | orchestrator | + config_drive = true 2026-02-05 00:02:24.153259 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.153275 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.153286 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-05 00:02:24.153296 | orchestrator | + force_delete = false 2026-02-05 00:02:24.153307 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.153318 | 
orchestrator | + id = (known after apply) 2026-02-05 00:02:24.153329 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.153339 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.153350 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.153375 | orchestrator | + name = "testbed-manager" 2026-02-05 00:02:24.153387 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.153398 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.153409 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.153419 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:24.153430 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.153441 | orchestrator | + user_data = (sensitive value) 2026-02-05 00:02:24.153452 | orchestrator | 2026-02-05 00:02:24.153463 | orchestrator | + block_device { 2026-02-05 00:02:24.153474 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.153485 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:24.153501 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.153512 | orchestrator | + multiattach = false 2026-02-05 00:02:24.153523 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.153533 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.153551 | orchestrator | } 2026-02-05 00:02:24.153562 | orchestrator | 2026-02-05 00:02:24.153573 | orchestrator | + network { 2026-02-05 00:02:24.153584 | orchestrator | + access_network = false 2026-02-05 00:02:24.153595 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.153606 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.153616 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.153634 | orchestrator | + name = (known after apply) 2026-02-05 00:02:24.153645 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.153656 | orchestrator | + uuid = (known after apply) 2026-02-05 
00:02:24.153667 | orchestrator | } 2026-02-05 00:02:24.153677 | orchestrator | } 2026-02-05 00:02:24.153688 | orchestrator | 2026-02-05 00:02:24.153699 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-05 00:02:24.153709 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:24.153720 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.153731 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.153741 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.153752 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.153762 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.153773 | orchestrator | + config_drive = true 2026-02-05 00:02:24.153784 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.153794 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.153804 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:24.153815 | orchestrator | + force_delete = false 2026-02-05 00:02:24.153826 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.153837 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.153848 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.153858 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.153869 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.153879 | orchestrator | + name = "testbed-node-0" 2026-02-05 00:02:24.153890 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.153901 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.153911 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.153922 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:24.153932 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.153943 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:24.153954 | orchestrator | 2026-02-05 00:02:24.153965 | orchestrator | + block_device { 2026-02-05 00:02:24.153975 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.153986 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:24.153997 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.154007 | orchestrator | + multiattach = false 2026-02-05 00:02:24.154062 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.154073 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.154096 | orchestrator | } 2026-02-05 00:02:24.154107 | orchestrator | 2026-02-05 00:02:24.154118 | orchestrator | + network { 2026-02-05 00:02:24.154129 | orchestrator | + access_network = false 2026-02-05 00:02:24.154139 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.154151 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.154161 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.154172 | orchestrator | + name = (known after apply) 2026-02-05 00:02:24.154183 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.154193 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.154204 | orchestrator | } 2026-02-05 00:02:24.154215 | orchestrator | } 2026-02-05 00:02:24.154226 | orchestrator | 2026-02-05 00:02:24.154237 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-05 00:02:24.154248 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:24.154258 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.154277 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.154288 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.154298 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.154309 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.154320 
| orchestrator | + config_drive = true 2026-02-05 00:02:24.154331 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.154341 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.154352 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:24.154417 | orchestrator | + force_delete = false 2026-02-05 00:02:24.154429 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.154440 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.154451 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.154461 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.154472 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.154483 | orchestrator | + name = "testbed-node-1" 2026-02-05 00:02:24.154493 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.154504 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.154514 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.154525 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:24.154536 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.154547 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:24.154557 | orchestrator | 2026-02-05 00:02:24.154568 | orchestrator | + block_device { 2026-02-05 00:02:24.154579 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.154589 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:24.154600 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.154611 | orchestrator | + multiattach = false 2026-02-05 00:02:24.154621 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.154632 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.154643 | orchestrator | } 2026-02-05 00:02:24.154654 | orchestrator | 2026-02-05 00:02:24.154664 | orchestrator | + network { 2026-02-05 00:02:24.154675 | orchestrator | + access_network = 
false 2026-02-05 00:02:24.154686 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.154696 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.154707 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.154718 | orchestrator | + name = (known after apply) 2026-02-05 00:02:24.154728 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.154739 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.154750 | orchestrator | } 2026-02-05 00:02:24.154760 | orchestrator | } 2026-02-05 00:02:24.154771 | orchestrator | 2026-02-05 00:02:24.154782 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-05 00:02:24.154792 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:24.154803 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.154814 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.154825 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.154844 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.154861 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.154872 | orchestrator | + config_drive = true 2026-02-05 00:02:24.154883 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.154894 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.154904 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:24.154915 | orchestrator | + force_delete = false 2026-02-05 00:02:24.154926 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.154936 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.154946 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.154962 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.154972 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.154981 | orchestrator | + name = 
"testbed-node-2" 2026-02-05 00:02:24.154991 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.155000 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.155009 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.155019 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:24.155028 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.155038 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:24.155047 | orchestrator | 2026-02-05 00:02:24.155057 | orchestrator | + block_device { 2026-02-05 00:02:24.155066 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.155075 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:24.155085 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.155094 | orchestrator | + multiattach = false 2026-02-05 00:02:24.155104 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.155113 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.155123 | orchestrator | } 2026-02-05 00:02:24.155132 | orchestrator | 2026-02-05 00:02:24.155142 | orchestrator | + network { 2026-02-05 00:02:24.155151 | orchestrator | + access_network = false 2026-02-05 00:02:24.155161 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.155170 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.155180 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.155189 | orchestrator | + name = (known after apply) 2026-02-05 00:02:24.155199 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.155208 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.155217 | orchestrator | } 2026-02-05 00:02:24.155227 | orchestrator | } 2026-02-05 00:02:24.155237 | orchestrator | 2026-02-05 00:02:24.155246 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-05 00:02:24.155256 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:24.155265 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.155274 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.155284 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.155293 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.155303 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.155312 | orchestrator | + config_drive = true 2026-02-05 00:02:24.155322 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.155331 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.155341 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:24.155350 | orchestrator | + force_delete = false 2026-02-05 00:02:24.155375 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.155385 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.155394 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.155404 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.155413 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.155423 | orchestrator | + name = "testbed-node-3" 2026-02-05 00:02:24.155432 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.155442 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.155451 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.155460 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:24.155469 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.155479 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:24.155488 | orchestrator | 2026-02-05 00:02:24.155498 | orchestrator | + block_device { 2026-02-05 00:02:24.155519 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.155529 | orchestrator | + delete_on_termination = false 2026-02-05 
00:02:24.155538 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.155554 | orchestrator | + multiattach = false 2026-02-05 00:02:24.155563 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.155572 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.155582 | orchestrator | } 2026-02-05 00:02:24.155591 | orchestrator | 2026-02-05 00:02:24.155601 | orchestrator | + network { 2026-02-05 00:02:24.155610 | orchestrator | + access_network = false 2026-02-05 00:02:24.155620 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.155630 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.155639 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.155648 | orchestrator | + name = (known after apply) 2026-02-05 00:02:24.155658 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.155667 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.155677 | orchestrator | } 2026-02-05 00:02:24.155686 | orchestrator | } 2026-02-05 00:02:24.155696 | orchestrator | 2026-02-05 00:02:24.155705 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-05 00:02:24.155715 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:24.155724 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.155734 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.155743 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.155753 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.155762 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.155772 | orchestrator | + config_drive = true 2026-02-05 00:02:24.155781 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.155791 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.155800 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:24.155810 | 
orchestrator | + force_delete = false 2026-02-05 00:02:24.155819 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.155829 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.155838 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.155854 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.155864 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.155873 | orchestrator | + name = "testbed-node-4" 2026-02-05 00:02:24.155883 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.155892 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.155901 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.155911 | orchestrator | + stop_before_destroy = false 2026-02-05 00:02:24.155920 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.155930 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:24.155939 | orchestrator | 2026-02-05 00:02:24.155949 | orchestrator | + block_device { 2026-02-05 00:02:24.155958 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.155968 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:24.155977 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.155986 | orchestrator | + multiattach = false 2026-02-05 00:02:24.155996 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.156005 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.156015 | orchestrator | } 2026-02-05 00:02:24.156024 | orchestrator | 2026-02-05 00:02:24.156034 | orchestrator | + network { 2026-02-05 00:02:24.156044 | orchestrator | + access_network = false 2026-02-05 00:02:24.156053 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.156063 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.156072 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.156081 | orchestrator | + name = (known 
after apply) 2026-02-05 00:02:24.156091 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.156100 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.156110 | orchestrator | } 2026-02-05 00:02:24.156119 | orchestrator | } 2026-02-05 00:02:24.156134 | orchestrator | 2026-02-05 00:02:24.156144 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-05 00:02:24.156153 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-05 00:02:24.156163 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-05 00:02:24.156172 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-05 00:02:24.156182 | orchestrator | + all_metadata = (known after apply) 2026-02-05 00:02:24.156191 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.156201 | orchestrator | + availability_zone = "nova" 2026-02-05 00:02:24.156210 | orchestrator | + config_drive = true 2026-02-05 00:02:24.156220 | orchestrator | + created = (known after apply) 2026-02-05 00:02:24.156229 | orchestrator | + flavor_id = (known after apply) 2026-02-05 00:02:24.156239 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-05 00:02:24.156248 | orchestrator | + force_delete = false 2026-02-05 00:02:24.156262 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-05 00:02:24.156272 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.156281 | orchestrator | + image_id = (known after apply) 2026-02-05 00:02:24.156291 | orchestrator | + image_name = (known after apply) 2026-02-05 00:02:24.156300 | orchestrator | + key_pair = "testbed" 2026-02-05 00:02:24.156309 | orchestrator | + name = "testbed-node-5" 2026-02-05 00:02:24.156319 | orchestrator | + power_state = "active" 2026-02-05 00:02:24.156328 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.156337 | orchestrator | + security_groups = (known after apply) 2026-02-05 00:02:24.156347 | orchestrator | + 
stop_before_destroy = false 2026-02-05 00:02:24.156356 | orchestrator | + updated = (known after apply) 2026-02-05 00:02:24.156381 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-05 00:02:24.156391 | orchestrator | 2026-02-05 00:02:24.156400 | orchestrator | + block_device { 2026-02-05 00:02:24.156410 | orchestrator | + boot_index = 0 2026-02-05 00:02:24.156419 | orchestrator | + delete_on_termination = false 2026-02-05 00:02:24.156429 | orchestrator | + destination_type = "volume" 2026-02-05 00:02:24.156438 | orchestrator | + multiattach = false 2026-02-05 00:02:24.156447 | orchestrator | + source_type = "volume" 2026-02-05 00:02:24.156457 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.156467 | orchestrator | } 2026-02-05 00:02:24.156476 | orchestrator | 2026-02-05 00:02:24.156486 | orchestrator | + network { 2026-02-05 00:02:24.156495 | orchestrator | + access_network = false 2026-02-05 00:02:24.156505 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-05 00:02:24.156514 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-05 00:02:24.156524 | orchestrator | + mac = (known after apply) 2026-02-05 00:02:24.156533 | orchestrator | + name = (known after apply) 2026-02-05 00:02:24.156543 | orchestrator | + port = (known after apply) 2026-02-05 00:02:24.156552 | orchestrator | + uuid = (known after apply) 2026-02-05 00:02:24.156562 | orchestrator | } 2026-02-05 00:02:24.156571 | orchestrator | } 2026-02-05 00:02:24.156581 | orchestrator | 2026-02-05 00:02:24.156590 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-05 00:02:24.156599 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-05 00:02:24.156609 | orchestrator | + fingerprint = (known after apply) 2026-02-05 00:02:24.156619 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.156628 | orchestrator | + name = "testbed" 2026-02-05 00:02:24.156637 | orchestrator | + private_key = 
(sensitive value) 2026-02-05 00:02:24.156647 | orchestrator | + public_key = (known after apply) 2026-02-05 00:02:24.156656 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.156666 | orchestrator | + user_id = (known after apply) 2026-02-05 00:02:24.156675 | orchestrator | } 2026-02-05 00:02:24.156685 | orchestrator | 2026-02-05 00:02:24.156694 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-05 00:02:24.156704 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-05 00:02:24.156720 | orchestrator | + device = (known after apply) 2026-02-05 00:02:24.156729 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.156739 | orchestrator | + instance_id = (known after apply) 2026-02-05 00:02:24.156748 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.156757 | orchestrator | + volume_id = (known after apply) 2026-02-05 00:02:24.156767 | orchestrator | } 2026-02-05 00:02:24.156776 | orchestrator | 2026-02-05 00:02:24.156786 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-05 00:02:24.156795 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-05 00:02:24.156805 | orchestrator | + device = (known after apply) 2026-02-05 00:02:24.156814 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.156824 | orchestrator | + instance_id = (known after apply) 2026-02-05 00:02:24.156833 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.156843 | orchestrator | + volume_id = (known after apply) 2026-02-05 00:02:24.156852 | orchestrator | } 2026-02-05 00:02:24.156862 | orchestrator | 2026-02-05 00:02:24.156878 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-05 00:02:24.156888 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
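The `security_group_management_rule1` plan entry above corresponds to an HCL declaration along the following lines. This is a hypothetical sketch reconstructed only from the planned attribute values, not necessarily the testbed repository's actual source; in particular the `security_group_id` reference is an assumption based on the `security_group_management` resource name that appears later in the same plan.

```hcl
# Hypothetical reconstruction of the "ssh" ingress rule shown in the plan.
# The security_group_id reference below is assumed, not taken from the log.
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}
```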
  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }
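The three `security_group_node_rule*` entries differ only in their protocol. As an aside, rules of this shape can also be generated from a single block with `for_each`; the sketch below shows that pattern and is not how the testbed actually declares them (the plan clearly shows three separately named resources, and the `security_group_id` reference is assumed).

```hcl
# Sketch of an equivalent for_each pattern, with the security_group_id
# reference assumed; the plan itself uses three separate rule resources.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rules" {
  for_each          = toset(["tcp", "udp", "icmp"])
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = each.value
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```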
00:02:24.162071 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-02-05 00:02:24.162081 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-02-05 00:02:24.162090 | orchestrator | + description = "vrrp" 2026-02-05 00:02:24.162100 | orchestrator | + direction = "ingress" 2026-02-05 00:02:24.162110 | orchestrator | + ethertype = "IPv4" 2026-02-05 00:02:24.162119 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.162129 | orchestrator | + protocol = "112" 2026-02-05 00:02:24.162138 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.162148 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-05 00:02:24.162157 | orchestrator | + remote_group_id = (known after apply) 2026-02-05 00:02:24.162167 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-02-05 00:02:24.162176 | orchestrator | + security_group_id = (known after apply) 2026-02-05 00:02:24.162186 | orchestrator | + tenant_id = (known after apply) 2026-02-05 00:02:24.162196 | orchestrator | } 2026-02-05 00:02:24.162205 | orchestrator | 2026-02-05 00:02:24.162220 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-02-05 00:02:24.162238 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-02-05 00:02:24.162253 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.162263 | orchestrator | + description = "management security group" 2026-02-05 00:02:24.162273 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.162282 | orchestrator | + name = "testbed-management" 2026-02-05 00:02:24.162292 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.162301 | orchestrator | + stateful = (known after apply) 2026-02-05 00:02:24.162311 | orchestrator | + tenant_id = (known after apply) 2026-02-05 00:02:24.162320 | orchestrator | } 2026-02-05 
00:02:24.162329 | orchestrator | 2026-02-05 00:02:24.162339 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-02-05 00:02:24.162348 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-02-05 00:02:24.162358 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.162407 | orchestrator | + description = "node security group" 2026-02-05 00:02:24.162417 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.162426 | orchestrator | + name = "testbed-node" 2026-02-05 00:02:24.162436 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.162445 | orchestrator | + stateful = (known after apply) 2026-02-05 00:02:24.162455 | orchestrator | + tenant_id = (known after apply) 2026-02-05 00:02:24.162464 | orchestrator | } 2026-02-05 00:02:24.162473 | orchestrator | 2026-02-05 00:02:24.162483 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-02-05 00:02:24.162493 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-02-05 00:02:24.162502 | orchestrator | + all_tags = (known after apply) 2026-02-05 00:02:24.162512 | orchestrator | + cidr = "192.168.16.0/20" 2026-02-05 00:02:24.162521 | orchestrator | + dns_nameservers = [ 2026-02-05 00:02:24.162531 | orchestrator | + "8.8.8.8", 2026-02-05 00:02:24.162540 | orchestrator | + "9.9.9.9", 2026-02-05 00:02:24.162550 | orchestrator | ] 2026-02-05 00:02:24.162560 | orchestrator | + enable_dhcp = true 2026-02-05 00:02:24.162569 | orchestrator | + gateway_ip = (known after apply) 2026-02-05 00:02:24.162579 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.162589 | orchestrator | + ip_version = 4 2026-02-05 00:02:24.162598 | orchestrator | + ipv6_address_mode = (known after apply) 2026-02-05 00:02:24.162608 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-02-05 00:02:24.162617 | orchestrator | + name = "subnet-testbed-management" 
2026-02-05 00:02:24.162627 | orchestrator | + network_id = (known after apply) 2026-02-05 00:02:24.162636 | orchestrator | + no_gateway = false 2026-02-05 00:02:24.162647 | orchestrator | + region = (known after apply) 2026-02-05 00:02:24.162665 | orchestrator | + service_types = (known after apply) 2026-02-05 00:02:24.162681 | orchestrator | + tenant_id = (known after apply) 2026-02-05 00:02:24.162691 | orchestrator | 2026-02-05 00:02:24.162700 | orchestrator | + allocation_pool { 2026-02-05 00:02:24.162710 | orchestrator | + end = "192.168.31.250" 2026-02-05 00:02:24.162720 | orchestrator | + start = "192.168.31.200" 2026-02-05 00:02:24.162727 | orchestrator | } 2026-02-05 00:02:24.162735 | orchestrator | } 2026-02-05 00:02:24.162743 | orchestrator | 2026-02-05 00:02:24.162751 | orchestrator | # terraform_data.image will be created 2026-02-05 00:02:24.162759 | orchestrator | + resource "terraform_data" "image" { 2026-02-05 00:02:24.162767 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.162774 | orchestrator | + input = "Ubuntu 24.04" 2026-02-05 00:02:24.162782 | orchestrator | + output = (known after apply) 2026-02-05 00:02:24.162790 | orchestrator | } 2026-02-05 00:02:24.162798 | orchestrator | 2026-02-05 00:02:24.162805 | orchestrator | # terraform_data.image_node will be created 2026-02-05 00:02:24.162813 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-05 00:02:24.162821 | orchestrator | + id = (known after apply) 2026-02-05 00:02:24.162828 | orchestrator | + input = "Ubuntu 24.04" 2026-02-05 00:02:24.162836 | orchestrator | + output = (known after apply) 2026-02-05 00:02:24.162844 | orchestrator | } 2026-02-05 00:02:24.162852 | orchestrator | 2026-02-05 00:02:24.162859 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
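[Editor's note] The plan above defines `subnet-testbed-management` with CIDR `192.168.16.0/20` and an allocation pool of `192.168.31.200`–`192.168.31.250`. A minimal Python sketch (values copied from the plan output; the script itself is illustrative and not part of the job) confirms the pool lies inside the subnet, which Neutron requires for the subnet create to succeed:

```python
import ipaddress

# Values taken from the plan output above for subnet-testbed-management.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# The allocation pool must lie entirely inside the subnet CIDR,
# otherwise Neutron rejects the subnet create request.
assert pool_start in cidr and pool_end in cidr

# A /20 spans 192.168.16.0 - 192.168.31.255, i.e. 4096 addresses.
print(cidr.num_addresses)  # 4096
```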
2026-02-05 00:02:24.162867 | orchestrator | 2026-02-05 00:02:24.162875 | orchestrator | Changes to Outputs: 2026-02-05 00:02:24.162883 | orchestrator | + manager_address = (sensitive value) 2026-02-05 00:02:24.162891 | orchestrator | + private_key = (sensitive value) 2026-02-05 00:02:24.162898 | orchestrator | terraform_data.image: Creating... 2026-02-05 00:02:24.162906 | orchestrator | terraform_data.image: Creation complete after 0s [id=73a6fde4-3da0-90b7-56f7-5c168576a49b] 2026-02-05 00:02:24.162914 | orchestrator | terraform_data.image_node: Creating... 2026-02-05 00:02:24.162922 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=7b982359-592f-7ab1-419e-24d67e48b8af] 2026-02-05 00:02:24.162929 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-05 00:02:24.162942 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-02-05 00:02:24.162951 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-02-05 00:02:24.162958 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-02-05 00:02:24.162966 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-05 00:02:24.162974 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-05 00:02:24.162981 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-02-05 00:02:24.162989 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-05 00:02:24.162997 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-05 00:02:24.163004 | orchestrator | openstack_networking_network_v2.net_management: Creating... 
2026-02-05 00:02:24.163012 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-05 00:02:24.163021 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-02-05 00:02:24.163029 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-05 00:02:24.163036 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-05 00:02:24.163044 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-02-05 00:02:24.163052 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-02-05 00:02:25.356224 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=72357bfc-f03f-4309-b922-c9dda8962779] 2026-02-05 00:02:25.366437 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-02-05 00:02:27.154007 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5c16bdfb-9776-4282-a52f-d0746538d24f] 2026-02-05 00:02:27.161141 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-02-05 00:02:27.186113 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=df9dffbb-fa4a-4614-acfc-458aacc61e85] 2026-02-05 00:02:27.200837 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=ba105820-b7fd-4d06-b751-3e65d5700a2c] 2026-02-05 00:02:27.200930 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=66450c46-76da-4fbd-b0f3-00f2a07ceccd] 2026-02-05 00:02:27.203339 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-02-05 00:02:27.207552 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 
2026-02-05 00:02:27.208664 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-02-05 00:02:27.212791 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=7eff1655d8cfb60a7003418d712ddf3833a86303] 2026-02-05 00:02:27.220150 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-02-05 00:02:27.251341 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=65c05e60-3149-4d51-82d7-128e0fd85726] 2026-02-05 00:02:27.258099 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-02-05 00:02:27.262149 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=b7bd6d63-837c-4716-bacc-a146e68be59b] 2026-02-05 00:02:27.264961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=ea1b8944-91e5-47d3-baee-befb07fac7f3] 2026-02-05 00:02:27.272179 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-02-05 00:02:27.277691 | orchestrator | local_file.id_rsa_pub: Creating... 2026-02-05 00:02:27.282161 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=bca111e6fae2461b67175bcea047555d1805be12] 2026-02-05 00:02:27.290810 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-02-05 00:02:27.304448 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=13446d2e-9611-4725-bf6d-ec20aba1d1c7] 2026-02-05 00:02:27.332052 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=36110d5e-3998-4d39-b163-f137840d584a] 2026-02-05 00:02:28.285426 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=42c0680d-eb04-4370-98ca-663de88ceb29] 2026-02-05 00:02:28.295792 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-02-05 00:02:28.790695 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=af379b57-ce7c-4d6b-83ee-e923da65a4fb] 2026-02-05 00:02:30.603110 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=7e282df8-56f1-48b7-aab2-50ed79008a58] 2026-02-05 00:02:30.642928 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=c136665a-9242-437c-80b5-efc7e2d18f11] 2026-02-05 00:02:30.656128 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=9c9a958b-8321-4f20-9f5f-ff253bc6c7cb] 2026-02-05 00:02:30.680592 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=3d892bf8-de40-4598-8b0b-6c2cde83153b] 2026-02-05 00:02:30.759850 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6] 2026-02-05 00:02:30.769482 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=bf2657f2-50ae-42ce-bd69-1f7fd81f5d96] 2026-02-05 00:02:32.733725 | orchestrator | openstack_networking_router_v2.router: Creation complete after 5s [id=eb3678d0-5048-4664-a2c0-810dbe1c4444] 2026-02-05 00:02:32.740310 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-02-05 00:02:32.741037 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-02-05 00:02:32.744527 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-02-05 00:02:32.970006 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b87432fd-b83b-4d0d-8835-5b5cfbbde543] 2026-02-05 00:02:32.989875 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 
2026-02-05 00:02:32.990658 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-02-05 00:02:32.991287 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-02-05 00:02:32.991888 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-02-05 00:02:32.994672 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-02-05 00:02:32.995355 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-02-05 00:02:32.995546 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-02-05 00:02:32.996391 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-02-05 00:02:33.024280 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=48d0732e-5625-4ecd-a665-363ae7e21374] 2026-02-05 00:02:33.032055 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-02-05 00:02:33.242279 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=fa177685-86c2-4006-a00e-c55a0765ad57] 2026-02-05 00:02:33.254574 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-02-05 00:02:33.482479 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=5855821a-3f24-4778-87e6-a8f558017a75] 2026-02-05 00:02:33.489692 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-02-05 00:02:33.719425 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=77c5aead-7251-41c5-85f1-5038de09bb00] 2026-02-05 00:02:33.726900 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
2026-02-05 00:02:33.871810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=1cdbc315-f768-4f6c-9274-db0c1ccb7a53] 2026-02-05 00:02:33.877950 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-02-05 00:02:34.013009 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=208a4c6c-b3cf-418a-845a-0b7969fa91e0] 2026-02-05 00:02:34.028188 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-02-05 00:02:34.033208 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=6f615b29-8d6c-4b66-8447-1a75ac915ad4] 2026-02-05 00:02:34.037316 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-02-05 00:02:34.088892 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=f08a75da-f76d-49c0-95d9-0314a65e618d] 2026-02-05 00:02:34.092248 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=2e247d79-5ce6-4ed8-babc-08f23e1b6aa1] 2026-02-05 00:02:34.093751 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=1e2cfd0d-5692-451c-b0f7-c08f35ea579f] 2026-02-05 00:02:34.094261 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-02-05 00:02:34.159142 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=ecd1e86e-df94-45ac-baeb-fdb51869f985] 2026-02-05 00:02:34.306166 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=37219c90-1ad3-475e-891b-a01abeabb354] 2026-02-05 00:02:34.323646 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=5133524c-3b9b-4fcc-850b-998a47da5580] 2026-02-05 00:02:34.558810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=f292d46d-495d-41b5-ac85-dec23987fe7e] 2026-02-05 00:02:34.821619 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=16099c71-610b-473b-bb84-83634479a5a9] 2026-02-05 00:02:34.925267 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=d9bb010a-218b-494a-b302-af31ae66f7b8] 2026-02-05 00:02:35.280902 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=0cd99f2e-81b2-4879-9fe7-8d94e4f5d3eb] 2026-02-05 00:02:36.584593 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=103d05cc-c5ae-46c6-97d6-61b217d2b38c] 2026-02-05 00:02:36.607974 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-02-05 00:02:36.620533 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-02-05 00:02:36.621174 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-02-05 00:02:36.626563 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-02-05 00:02:36.637489 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 
2026-02-05 00:02:36.637587 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-02-05 00:02:36.651068 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-02-05 00:02:38.281122 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=61634c88-ff3e-4081-a02a-f5ae5833c49b] 2026-02-05 00:02:38.291904 | orchestrator | local_file.inventory: Creating... 2026-02-05 00:02:38.294518 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-02-05 00:02:38.295422 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-02-05 00:02:38.298131 | orchestrator | local_file.inventory: Creation complete after 0s [id=dd59b28471a4b6946372026de63f47a80792c7c6] 2026-02-05 00:02:38.299451 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=5509cb9be5f8d55e8fec22d0e47564563ae0d859] 2026-02-05 00:02:39.362690 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=61634c88-ff3e-4081-a02a-f5ae5833c49b] 2026-02-05 00:02:46.625584 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-02-05 00:02:46.625747 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-02-05 00:02:46.631852 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-02-05 00:02:46.642272 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-02-05 00:02:46.642329 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-02-05 00:02:46.652273 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-02-05 00:02:56.635124 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[20s elapsed] 2026-02-05 00:02:56.912879 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-02-05 00:02:56.912987 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-02-05 00:02:56.913002 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-02-05 00:02:56.913014 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-02-05 00:02:56.913025 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-02-05 00:03:06.644580 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-02-05 00:03:06.644733 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-02-05 00:03:06.644776 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-02-05 00:03:06.644790 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-02-05 00:03:06.644801 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-02-05 00:03:06.653071 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-02-05 00:03:16.650412 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2026-02-05 00:03:16.650525 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-02-05 00:03:16.650547 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-02-05 00:03:16.650555 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-02-05 00:03:16.650562 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... 
[40s elapsed] 2026-02-05 00:03:16.653596 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-02-05 00:03:17.425538 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=f7e20b5c-4a66-401a-ba3b-90a26d6afaae] 2026-02-05 00:03:17.480777 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 40s [id=54dfa5c5-5a93-4118-bd36-9d45ee1c553a] 2026-02-05 00:03:17.491294 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 40s [id=4934d422-92a2-4b71-a4a9-bd5a80242b9e] 2026-02-05 00:03:17.535042 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=80f627fe-05ba-43fd-ae23-090c8f46de75] 2026-02-05 00:03:17.582490 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=93584630-d9e1-41a0-bf60-8ad534fe374b] 2026-02-05 00:03:26.650950 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-02-05 00:03:28.066248 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=76339a50-886b-4be4-b273-459c46e97035] 2026-02-05 00:03:28.078967 | orchestrator | null_resource.node_semaphore: Creating... 2026-02-05 00:03:28.111106 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-02-05 00:03:28.113807 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=8342279207132085611] 2026-02-05 00:03:28.118112 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-02-05 00:03:28.118205 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-02-05 00:03:28.118215 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
2026-02-05 00:03:28.124079 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-02-05 00:03:28.147727 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-02-05 00:03:28.147946 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-02-05 00:03:28.162694 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-02-05 00:03:28.167904 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-02-05 00:03:28.188019 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-02-05 00:03:31.514422 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=93584630-d9e1-41a0-bf60-8ad534fe374b/13446d2e-9611-4725-bf6d-ec20aba1d1c7] 2026-02-05 00:03:31.522423 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=4934d422-92a2-4b71-a4a9-bd5a80242b9e/65c05e60-3149-4d51-82d7-128e0fd85726] 2026-02-05 00:03:31.560446 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=76339a50-886b-4be4-b273-459c46e97035/ba105820-b7fd-4d06-b751-3e65d5700a2c] 2026-02-05 00:03:31.582817 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=4934d422-92a2-4b71-a4a9-bd5a80242b9e/ea1b8944-91e5-47d3-baee-befb07fac7f3] 2026-02-05 00:03:31.594105 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=93584630-d9e1-41a0-bf60-8ad534fe374b/66450c46-76da-4fbd-b0f3-00f2a07ceccd] 2026-02-05 00:03:31.652756 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=76339a50-886b-4be4-b273-459c46e97035/36110d5e-3998-4d39-b163-f137840d584a] 2026-02-05 00:03:37.726196 | 
orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=4934d422-92a2-4b71-a4a9-bd5a80242b9e/b7bd6d63-837c-4716-bacc-a146e68be59b] 2026-02-05 00:03:37.728849 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=93584630-d9e1-41a0-bf60-8ad534fe374b/5c16bdfb-9776-4282-a52f-d0746538d24f] 2026-02-05 00:03:37.760947 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=76339a50-886b-4be4-b273-459c46e97035/df9dffbb-fa4a-4614-acfc-458aacc61e85] 2026-02-05 00:03:38.188565 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-02-05 00:03:48.189188 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-02-05 00:03:48.516045 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=c37a87a2-733e-499d-b13e-a5f4f9d6b861] 2026-02-05 00:03:48.534937 | orchestrator | 2026-02-05 00:03:48.535024 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
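[Editor's note] The two outputs that follow (`manager_address`, `private_key`) are marked sensitive, so the console prints them redacted. Downstream tooling typically retrieves such values with `terraform output -json`, which returns each output as an object with `sensitive`, `type`, and `value` keys. A minimal parsing sketch, using a hypothetical sample payload (the address `203.0.113.10` is a documentation-range placeholder, not a value from this job):

```python
import json

# Hypothetical sample of what `terraform output -json` would return for the
# two outputs above; the real values are redacted in the console log.
sample = """
{
  "manager_address": {"sensitive": true, "type": "string", "value": "203.0.113.10"},
  "private_key": {"sensitive": true, "type": "string", "value": "-----BEGIN..."}
}
"""

outputs = json.loads(sample)

# Sensitive outputs are redacted on the console but present in the JSON form.
manager_address = outputs["manager_address"]["value"]
assert outputs["manager_address"]["sensitive"] is True
print(manager_address)  # 203.0.113.10
```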
2026-02-05 00:03:48.535080 | orchestrator | 2026-02-05 00:03:48.535094 | orchestrator | Outputs: 2026-02-05 00:03:48.535107 | orchestrator | 2026-02-05 00:03:48.535145 | orchestrator | manager_address = 2026-02-05 00:03:48.535158 | orchestrator | private_key = 2026-02-05 00:03:48.957284 | orchestrator | ok: Runtime: 0:01:30.716125 2026-02-05 00:03:48.986596 | 2026-02-05 00:03:48.986737 | TASK [Create infrastructure (stable)] 2026-02-05 00:03:49.522404 | orchestrator | skipping: Conditional result was False 2026-02-05 00:03:49.539624 | 2026-02-05 00:03:49.539866 | TASK [Fetch manager address] 2026-02-05 00:03:50.006501 | orchestrator | ok 2026-02-05 00:03:50.013677 | 2026-02-05 00:03:50.013801 | TASK [Set manager_host address] 2026-02-05 00:03:50.103502 | orchestrator | ok 2026-02-05 00:03:50.114213 | 2026-02-05 00:03:50.114422 | LOOP [Update ansible collections] 2026-02-05 00:03:52.839166 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:03:52.839548 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-05 00:03:52.839606 | orchestrator | Starting galaxy collection install process 2026-02-05 00:03:52.839642 | orchestrator | Process install dependency map 2026-02-05 00:03:52.839674 | orchestrator | Starting collection install process 2026-02-05 00:03:52.839703 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-02-05 00:03:52.839737 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-02-05 00:03:52.839781 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-02-05 00:03:52.839860 | orchestrator | ok: Item: commons Runtime: 0:00:02.310577 2026-02-05 00:03:54.089284 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-02-05 
00:03:54.089445 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:03:54.089495 | orchestrator | Starting galaxy collection install process 2026-02-05 00:03:54.089533 | orchestrator | Process install dependency map 2026-02-05 00:03:54.089569 | orchestrator | Starting collection install process 2026-02-05 00:03:54.089603 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-02-05 00:03:54.089637 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-02-05 00:03:54.089669 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-05 00:03:54.089721 | orchestrator | ok: Item: services Runtime: 0:00:00.964063 2026-02-05 00:03:54.104260 | 2026-02-05 00:03:54.104385 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-05 00:04:04.713516 | orchestrator | ok 2026-02-05 00:04:04.723316 | 2026-02-05 00:04:04.723505 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-05 00:05:04.767610 | orchestrator | ok 2026-02-05 00:05:04.778452 | 2026-02-05 00:05:04.778583 | TASK [Fetch manager ssh hostkey] 2026-02-05 00:05:06.355774 | orchestrator | Output suppressed because no_log was given 2026-02-05 00:05:06.371515 | 2026-02-05 00:05:06.371703 | TASK [Get ssh keypair from terraform environment] 2026-02-05 00:05:06.911625 | orchestrator | ok: Runtime: 0:00:00.006860 2026-02-05 00:05:06.928869 | 2026-02-05 00:05:06.929043 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-05 00:05:06.977205 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
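[Editor's note] The task "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" maps onto the `ansible.builtin.wait_for` module, which can poll a TCP port and match a banner with `search_regex`. A sketch of how such a task is typically written; the actual task in the osism/testbed playbooks may differ, and `manager_host` is assumed here to hold the address set by the earlier "Set manager_host address" task:

```yaml
# Illustrative sketch only; `manager_host` is an assumed variable name.
- name: Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"
  ansible.builtin.wait_for:
    host: "{{ manager_host }}"
    port: 22
    search_regex: OpenSSH   # match the SSH banner, not just an open port
    timeout: 300
  delegate_to: localhost
```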
2026-02-05 00:05:06.987771 | 2026-02-05 00:05:06.987912 | TASK [Run manager part 0] 2026-02-05 00:05:08.125595 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:05:08.278912 | orchestrator | 2026-02-05 00:05:08.278960 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-02-05 00:05:08.278967 | orchestrator | 2026-02-05 00:05:08.278981 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-02-05 00:05:10.153095 | orchestrator | ok: [testbed-manager] 2026-02-05 00:05:10.153147 | orchestrator | 2026-02-05 00:05:10.153170 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-05 00:05:10.153179 | orchestrator | 2026-02-05 00:05:10.153188 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:05:12.082091 | orchestrator | ok: [testbed-manager] 2026-02-05 00:05:12.082138 | orchestrator | 2026-02-05 00:05:12.082148 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-05 00:05:12.805237 | orchestrator | ok: [testbed-manager] 2026-02-05 00:05:12.805283 | orchestrator | 2026-02-05 00:05:12.805290 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-05 00:05:12.849253 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:12.849298 | orchestrator | 2026-02-05 00:05:12.849307 | orchestrator | TASK [Update package cache] **************************************************** 2026-02-05 00:05:12.882056 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:12.882097 | orchestrator | 2026-02-05 00:05:12.882104 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-05 00:05:12.914469 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:12.914596 | 
orchestrator | 2026-02-05 00:05:12.914608 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-05 00:05:12.948242 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:12.948289 | orchestrator | 2026-02-05 00:05:12.948298 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-05 00:05:12.982471 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:12.982518 | orchestrator | 2026-02-05 00:05:12.982530 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-05 00:05:13.016464 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:13.016511 | orchestrator | 2026-02-05 00:05:13.016519 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-05 00:05:13.048655 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:05:13.048894 | orchestrator | 2026-02-05 00:05:13.048916 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-05 00:05:13.771043 | orchestrator | changed: [testbed-manager] 2026-02-05 00:05:13.771104 | orchestrator | 2026-02-05 00:05:13.771112 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-05 00:07:48.001306 | orchestrator | changed: [testbed-manager] 2026-02-05 00:07:48.001349 | orchestrator | 2026-02-05 00:07:48.001356 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-05 00:10:59.284332 | orchestrator | changed: [testbed-manager] 2026-02-05 00:10:59.284381 | orchestrator | 2026-02-05 00:10:59.284390 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-05 00:11:21.991718 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:21.991805 | orchestrator | 2026-02-05 00:11:21.991822 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2026-02-05 00:11:32.019265 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:32.019339 | orchestrator | 2026-02-05 00:11:32.019349 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-05 00:11:32.068363 | orchestrator | ok: [testbed-manager] 2026-02-05 00:11:32.068425 | orchestrator | 2026-02-05 00:11:32.068436 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-05 00:11:32.873371 | orchestrator | ok: [testbed-manager] 2026-02-05 00:11:32.873454 | orchestrator | 2026-02-05 00:11:32.873469 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-05 00:11:33.627666 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:33.627749 | orchestrator | 2026-02-05 00:11:33.627764 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-05 00:11:39.974827 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:39.974919 | orchestrator | 2026-02-05 00:11:39.974959 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-05 00:11:46.050656 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:46.050749 | orchestrator | 2026-02-05 00:11:46.050769 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-05 00:11:48.677821 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:48.677922 | orchestrator | 2026-02-05 00:11:48.677939 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-05 00:11:50.484033 | orchestrator | changed: [testbed-manager] 2026-02-05 00:11:50.484083 | orchestrator | 2026-02-05 00:11:50.484094 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-05 
00:11:51.690216 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-05 00:11:51.690291 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-05 00:11:51.690302 | orchestrator | 2026-02-05 00:11:51.690312 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-05 00:11:51.735173 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-05 00:11:51.735243 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-05 00:11:51.735255 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-05 00:11:51.735266 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-05 00:12:00.311505 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-05 00:12:00.311607 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-05 00:12:00.311623 | orchestrator | 2026-02-05 00:12:00.311635 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-05 00:12:00.927783 | orchestrator | changed: [testbed-manager] 2026-02-05 00:12:00.927869 | orchestrator | 2026-02-05 00:12:00.927887 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-05 00:13:21.225965 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-05 00:13:21.226100 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-05 00:13:21.226120 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-05 00:13:21.226131 | orchestrator | 2026-02-05 00:13:21.226142 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-05 00:13:23.636577 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2026-02-05 00:13:23.636632 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-05 00:13:23.636638 | orchestrator | 2026-02-05 00:13:23.636643 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-05 00:13:23.636648 | orchestrator | 2026-02-05 00:13:23.636652 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:13:25.075941 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:25.076020 | orchestrator | 2026-02-05 00:13:25.076036 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-05 00:13:25.122803 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:25.122892 | orchestrator | 2026-02-05 00:13:25.122909 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-05 00:13:25.205293 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:25.205400 | orchestrator | 2026-02-05 00:13:25.205424 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-05 00:13:26.069919 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:26.070008 | orchestrator | 2026-02-05 00:13:26.070088 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-05 00:13:26.858328 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:26.858403 | orchestrator | 2026-02-05 00:13:26.858415 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-05 00:13:28.294459 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-05 00:13:28.294498 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-05 00:13:28.294505 | orchestrator | 2026-02-05 00:13:28.294519 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2026-02-05 00:13:29.768293 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:29.768370 | orchestrator | 2026-02-05 00:13:29.768383 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-05 00:13:31.447105 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-05 00:13:31.447203 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-05 00:13:31.447218 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-05 00:13:31.447230 | orchestrator | 2026-02-05 00:13:31.447243 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-05 00:13:31.503546 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:31.503656 | orchestrator | 2026-02-05 00:13:31.503683 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-05 00:13:31.584271 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:31.584357 | orchestrator | 2026-02-05 00:13:31.584375 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-05 00:13:32.150272 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:32.150360 | orchestrator | 2026-02-05 00:13:32.150376 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-05 00:13:32.225025 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:32.225127 | orchestrator | 2026-02-05 00:13:32.225155 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-05 00:13:33.141644 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:13:33.141736 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:33.141753 | orchestrator | 2026-02-05 00:13:33.141765 | orchestrator | TASK 
[osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-05 00:13:33.181137 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:33.181288 | orchestrator | 2026-02-05 00:13:33.181307 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-05 00:13:33.219076 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:33.219166 | orchestrator | 2026-02-05 00:13:33.219181 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-05 00:13:33.259191 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:33.259315 | orchestrator | 2026-02-05 00:13:33.259335 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-05 00:13:33.323542 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:33.323621 | orchestrator | 2026-02-05 00:13:33.323635 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-05 00:13:34.019383 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:34.019488 | orchestrator | 2026-02-05 00:13:34.019514 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-05 00:13:34.019534 | orchestrator | 2026-02-05 00:13:34.019546 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:13:35.561276 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:35.562099 | orchestrator | 2026-02-05 00:13:35.562118 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-05 00:13:36.668813 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:36.668878 | orchestrator | 2026-02-05 00:13:36.668893 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:13:36.668906 | orchestrator | testbed-manager : ok=33 changed=23 
unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-05 00:13:36.668917 | orchestrator | 2026-02-05 00:13:36.838086 | orchestrator | ok: Runtime: 0:08:29.476437 2026-02-05 00:13:36.850552 | 2026-02-05 00:13:36.850698 | TASK [Point out that logging in to the manager is now possible] 2026-02-05 00:13:36.886104 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-05 00:13:36.894894 | 2026-02-05 00:13:36.895019 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-05 00:13:36.943372 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output from it here. It takes a few minutes for this task to complete. 2026-02-05 00:13:36.954026 | 2026-02-05 00:13:36.954162 | TASK [Run manager part 1 + 2] 2026-02-05 00:13:38.335291 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-05 00:13:38.399983 | orchestrator | 2026-02-05 00:13:38.400031 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-05 00:13:38.400038 | orchestrator | 2026-02-05 00:13:38.400051 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:13:41.546644 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:41.546744 | orchestrator | 2026-02-05 00:13:41.546804 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-05 00:13:41.588903 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:41.588965 | orchestrator | 2026-02-05 00:13:41.588979 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-05 00:13:41.639354 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:41.639457 | orchestrator | 2026-02-05 00:13:41.639484 | orchestrator | TASK [osism.commons.repository : Gather variables for
each operating system] *** 2026-02-05 00:13:41.684061 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:41.684123 | orchestrator | 2026-02-05 00:13:41.684135 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-05 00:13:41.758078 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:41.758136 | orchestrator | 2026-02-05 00:13:41.758148 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-05 00:13:41.843709 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:41.843759 | orchestrator | 2026-02-05 00:13:41.843768 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-05 00:13:41.896123 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-05 00:13:41.896165 | orchestrator | 2026-02-05 00:13:41.896171 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-05 00:13:42.617284 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:42.617864 | orchestrator | 2026-02-05 00:13:42.617886 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-05 00:13:42.672515 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:13:42.672579 | orchestrator | 2026-02-05 00:13:42.672592 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-05 00:13:44.144796 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:44.144892 | orchestrator | 2026-02-05 00:13:44.144913 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-05 00:13:44.744955 | orchestrator | ok: [testbed-manager] 2026-02-05 00:13:44.745042 | orchestrator | 2026-02-05 00:13:44.745059 | orchestrator | TASK [osism.commons.repository : Copy 
ubuntu.sources file] ********************* 2026-02-05 00:13:45.927346 | orchestrator | changed: [testbed-manager] 2026-02-05 00:13:45.927461 | orchestrator | 2026-02-05 00:13:45.927490 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-05 00:14:00.425373 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:00.425451 | orchestrator | 2026-02-05 00:14:00.425458 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-05 00:14:01.083736 | orchestrator | ok: [testbed-manager] 2026-02-05 00:14:01.083789 | orchestrator | 2026-02-05 00:14:01.083796 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-02-05 00:14:01.142799 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:14:01.142849 | orchestrator | 2026-02-05 00:14:01.142855 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-05 00:14:02.109575 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:02.109695 | orchestrator | 2026-02-05 00:14:02.109713 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-05 00:14:03.057627 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:03.057706 | orchestrator | 2026-02-05 00:14:03.057722 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-05 00:14:03.638864 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:03.638934 | orchestrator | 2026-02-05 00:14:03.638947 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-05 00:14:03.691940 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-05 00:14:03.692143 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-02-05 00:14:03.692158 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-05 00:14:03.692165 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-02-05 00:14:06.515568 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:06.515732 | orchestrator | 2026-02-05 00:14:06.515748 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-05 00:14:14.667504 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-05 00:14:14.667607 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-05 00:14:14.667626 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-05 00:14:14.667640 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-05 00:14:14.667661 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-05 00:14:14.667673 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-05 00:14:14.667684 | orchestrator | 2026-02-05 00:14:14.667697 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-05 00:14:15.672870 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:15.673802 | orchestrator | 2026-02-05 00:14:15.674085 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-05 00:14:15.720157 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:14:15.720297 | orchestrator | 2026-02-05 00:14:15.720314 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-05 00:14:18.932518 | orchestrator | changed: [testbed-manager] 2026-02-05 00:14:18.932609 | orchestrator | 2026-02-05 00:14:18.932630 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-05 00:14:18.978135 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:14:18.978204 | 
orchestrator | 2026-02-05 00:14:18.978220 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-05 00:15:56.826568 | orchestrator | changed: [testbed-manager] 2026-02-05 00:15:56.826664 | orchestrator | 2026-02-05 00:15:56.826683 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-05 00:15:58.084092 | orchestrator | ok: [testbed-manager] 2026-02-05 00:15:58.084900 | orchestrator | 2026-02-05 00:15:58.084982 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:15:58.084999 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-05 00:15:58.085011 | orchestrator | 2026-02-05 00:15:58.575812 | orchestrator | ok: Runtime: 0:02:20.895748 2026-02-05 00:15:58.593422 | 2026-02-05 00:15:58.593561 | TASK [Reboot manager] 2026-02-05 00:16:00.131057 | orchestrator | ok: Runtime: 0:00:00.968017 2026-02-05 00:16:00.149402 | 2026-02-05 00:16:00.149562 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-05 00:16:16.537926 | orchestrator | ok 2026-02-05 00:16:16.548740 | 2026-02-05 00:16:16.548907 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-05 00:17:16.603915 | orchestrator | ok 2026-02-05 00:17:16.614108 | 2026-02-05 00:17:16.614248 | TASK [Deploy manager + bootstrap nodes] 2026-02-05 00:17:18.944396 | orchestrator | 2026-02-05 00:17:18.944589 | orchestrator | # DEPLOY MANAGER 2026-02-05 00:17:18.944615 | orchestrator | 2026-02-05 00:17:18.944631 | orchestrator | + set -e 2026-02-05 00:17:18.944645 | orchestrator | + echo 2026-02-05 00:17:18.944660 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-05 00:17:18.944677 | orchestrator | + echo 2026-02-05 00:17:18.944727 | orchestrator | + cat /opt/manager-vars.sh 2026-02-05 00:17:18.947445 | orchestrator | export NUMBER_OF_NODES=6 2026-02-05 
00:17:18.947486 | orchestrator | 2026-02-05 00:17:18.947536 | orchestrator | export CEPH_VERSION=reef 2026-02-05 00:17:18.947553 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-05 00:17:18.947566 | orchestrator | export MANAGER_VERSION=latest 2026-02-05 00:17:18.947590 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-05 00:17:18.947602 | orchestrator | 2026-02-05 00:17:18.947620 | orchestrator | export ARA=false 2026-02-05 00:17:18.947632 | orchestrator | export DEPLOY_MODE=manager 2026-02-05 00:17:18.947649 | orchestrator | export TEMPEST=true 2026-02-05 00:17:18.947661 | orchestrator | export IS_ZUUL=true 2026-02-05 00:17:18.947672 | orchestrator | 2026-02-05 00:17:18.947690 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-02-05 00:17:18.947702 | orchestrator | export EXTERNAL_API=false 2026-02-05 00:17:18.947713 | orchestrator | 2026-02-05 00:17:18.947723 | orchestrator | export IMAGE_USER=ubuntu 2026-02-05 00:17:18.947738 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-05 00:17:18.947748 | orchestrator | 2026-02-05 00:17:18.947759 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-05 00:17:18.947778 | orchestrator | 2026-02-05 00:17:18.947789 | orchestrator | + echo 2026-02-05 00:17:18.947802 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 00:17:18.948532 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 00:17:18.948554 | orchestrator | ++ INTERACTIVE=false 2026-02-05 00:17:18.948568 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 00:17:18.948617 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 00:17:18.948746 | orchestrator | + source /opt/manager-vars.sh 2026-02-05 00:17:18.948787 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-05 00:17:18.948825 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-05 00:17:18.948838 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-05 00:17:18.948877 | orchestrator | ++ CEPH_VERSION=reef 2026-02-05 00:17:18.948890 | orchestrator 
| ++ export CONFIGURATION_VERSION=main 2026-02-05 00:17:18.948954 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-05 00:17:18.948973 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-05 00:17:18.948984 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-05 00:17:18.948996 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-05 00:17:18.949016 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-05 00:17:18.949028 | orchestrator | ++ export ARA=false 2026-02-05 00:17:18.949040 | orchestrator | ++ ARA=false 2026-02-05 00:17:18.949051 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-05 00:17:18.949062 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-05 00:17:18.949091 | orchestrator | ++ export TEMPEST=true 2026-02-05 00:17:18.949102 | orchestrator | ++ TEMPEST=true 2026-02-05 00:17:18.949113 | orchestrator | ++ export IS_ZUUL=true 2026-02-05 00:17:18.949124 | orchestrator | ++ IS_ZUUL=true 2026-02-05 00:17:18.949135 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-02-05 00:17:18.949146 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-02-05 00:17:18.949157 | orchestrator | ++ export EXTERNAL_API=false 2026-02-05 00:17:18.949168 | orchestrator | ++ EXTERNAL_API=false 2026-02-05 00:17:18.949178 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-05 00:17:18.949189 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-05 00:17:18.949200 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-05 00:17:18.949211 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-05 00:17:18.949222 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-05 00:17:18.949233 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-05 00:17:18.949248 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-05 00:17:18.987339 | orchestrator | + docker version 2026-02-05 00:17:19.108845 | orchestrator | Client: Docker Engine - Community 2026-02-05 00:17:19.108947 | orchestrator | Version: 27.5.1 
2026-02-05 00:17:19.108964 | orchestrator | API version: 1.47 2026-02-05 00:17:19.108978 | orchestrator | Go version: go1.22.11 2026-02-05 00:17:19.108989 | orchestrator | Git commit: 9f9e405 2026-02-05 00:17:19.109000 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-05 00:17:19.109012 | orchestrator | OS/Arch: linux/amd64 2026-02-05 00:17:19.109023 | orchestrator | Context: default 2026-02-05 00:17:19.109034 | orchestrator | 2026-02-05 00:17:19.109046 | orchestrator | Server: Docker Engine - Community 2026-02-05 00:17:19.109057 | orchestrator | Engine: 2026-02-05 00:17:19.109118 | orchestrator | Version: 27.5.1 2026-02-05 00:17:19.109133 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-05 00:17:19.109174 | orchestrator | Go version: go1.22.11 2026-02-05 00:17:19.109186 | orchestrator | Git commit: 4c9b3b0 2026-02-05 00:17:19.109197 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-05 00:17:19.109207 | orchestrator | OS/Arch: linux/amd64 2026-02-05 00:17:19.109218 | orchestrator | Experimental: false 2026-02-05 00:17:19.109229 | orchestrator | containerd: 2026-02-05 00:17:19.109240 | orchestrator | Version: v2.2.1 2026-02-05 00:17:19.109251 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-05 00:17:19.109262 | orchestrator | runc: 2026-02-05 00:17:19.109273 | orchestrator | Version: 1.3.4 2026-02-05 00:17:19.109284 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-05 00:17:19.109295 | orchestrator | docker-init: 2026-02-05 00:17:19.109305 | orchestrator | Version: 0.19.0 2026-02-05 00:17:19.109317 | orchestrator | GitCommit: de40ad0 2026-02-05 00:17:19.111745 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-05 00:17:19.119880 | orchestrator | + set -e 2026-02-05 00:17:19.119962 | orchestrator | + source /opt/manager-vars.sh 2026-02-05 00:17:19.119978 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-05 00:17:19.119989 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-05 
00:17:19.119999 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-05 00:17:19.120008 | orchestrator | ++ CEPH_VERSION=reef 2026-02-05 00:17:19.120017 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-05 00:17:19.120027 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-05 00:17:19.120036 | orchestrator | ++ export MANAGER_VERSION=latest 2026-02-05 00:17:19.120045 | orchestrator | ++ MANAGER_VERSION=latest 2026-02-05 00:17:19.120054 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-05 00:17:19.120062 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-05 00:17:19.120094 | orchestrator | ++ export ARA=false 2026-02-05 00:17:19.120103 | orchestrator | ++ ARA=false 2026-02-05 00:17:19.120112 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-05 00:17:19.120121 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-05 00:17:19.120130 | orchestrator | ++ export TEMPEST=true 2026-02-05 00:17:19.120139 | orchestrator | ++ TEMPEST=true 2026-02-05 00:17:19.120147 | orchestrator | ++ export IS_ZUUL=true 2026-02-05 00:17:19.120156 | orchestrator | ++ IS_ZUUL=true 2026-02-05 00:17:19.120165 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-02-05 00:17:19.120174 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23 2026-02-05 00:17:19.120182 | orchestrator | ++ export EXTERNAL_API=false 2026-02-05 00:17:19.120191 | orchestrator | ++ EXTERNAL_API=false 2026-02-05 00:17:19.120199 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-05 00:17:19.120208 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-05 00:17:19.120217 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-05 00:17:19.120226 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-05 00:17:19.120235 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-05 00:17:19.120244 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-05 00:17:19.120252 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 00:17:19.120261 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-05 00:17:19.120270 | orchestrator | ++ INTERACTIVE=false 2026-02-05 00:17:19.120279 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 00:17:19.120291 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 00:17:19.120300 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-05 00:17:19.120318 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-05 00:17:19.120327 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-02-05 00:17:19.126583 | orchestrator | + set -e 2026-02-05 00:17:19.126620 | orchestrator | + VERSION=reef 2026-02-05 00:17:19.127924 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:17:19.132611 | orchestrator | + [[ -n ceph_version: reef ]] 2026-02-05 00:17:19.132658 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:17:19.138159 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-02-05 00:17:19.143809 | orchestrator | + set -e 2026-02-05 00:17:19.143874 | orchestrator | + VERSION=2024.2 2026-02-05 00:17:19.144841 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:17:19.147395 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-02-05 00:17:19.147440 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-02-05 00:17:19.152612 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-05 00:17:19.153653 | orchestrator | ++ semver latest 7.0.0 2026-02-05 00:17:19.208490 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:17:19.208592 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-05 00:17:19.208608 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-05 00:17:19.209609 | orchestrator | ++ semver latest 10.0.0-0 2026-02-05 00:17:19.265875 | 
orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:17:19.266546 | orchestrator | ++ semver 2024.2 2025.1 2026-02-05 00:17:19.311852 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:17:19.311959 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-05 00:17:19.401489 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 00:17:19.403779 | orchestrator | + source /opt/venv/bin/activate 2026-02-05 00:17:19.405184 | orchestrator | ++ deactivate nondestructive 2026-02-05 00:17:19.405267 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:17:19.405281 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:17:19.405294 | orchestrator | ++ hash -r 2026-02-05 00:17:19.405310 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:17:19.405321 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-05 00:17:19.405332 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-05 00:17:19.405350 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-05 00:17:19.405374 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-05 00:17:19.405386 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-05 00:17:19.405398 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-05 00:17:19.405409 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-05 00:17:19.405422 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:17:19.405434 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:17:19.405445 | orchestrator | ++ export PATH 2026-02-05 00:17:19.405456 | orchestrator | ++ '[' -n '' ']' 2026-02-05 00:17:19.405467 | orchestrator | ++ '[' -z '' ']' 2026-02-05 00:17:19.405479 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-05 00:17:19.405493 | orchestrator | ++ PS1='(venv) ' 2026-02-05 00:17:19.405587 | orchestrator | ++ export PS1 2026-02-05 00:17:19.405603 | orchestrator | ++ 
VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-05 00:17:19.405616 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-05 00:17:19.405646 | orchestrator | ++ hash -r 2026-02-05 00:17:19.405698 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-05 00:17:20.595957 | orchestrator | 2026-02-05 00:17:20.596064 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-05 00:17:20.596121 | orchestrator | 2026-02-05 00:17:20.596134 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-05 00:17:21.173414 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:21.173544 | orchestrator | 2026-02-05 00:17:21.173563 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-05 00:17:22.123261 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:22.123377 | orchestrator | 2026-02-05 00:17:22.123400 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-05 00:17:22.123420 | orchestrator | 2026-02-05 00:17:22.123437 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:17:24.398400 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:24.398467 | orchestrator | 2026-02-05 00:17:24.398475 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-05 00:17:24.447844 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:24.447913 | orchestrator | 2026-02-05 00:17:24.447921 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-05 00:17:24.895571 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:24.895642 | orchestrator | 2026-02-05 00:17:24.895649 | orchestrator | TASK [Add netbox_enable parameter] 
********************************************* 2026-02-05 00:17:24.933144 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:17:24.933192 | orchestrator | 2026-02-05 00:17:24.933197 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-05 00:17:25.277246 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:25.277316 | orchestrator | 2026-02-05 00:17:25.277323 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-05 00:17:25.596347 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:25.596439 | orchestrator | 2026-02-05 00:17:25.596450 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-05 00:17:25.707424 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:17:25.707476 | orchestrator | 2026-02-05 00:17:25.707483 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-02-05 00:17:25.707488 | orchestrator | 2026-02-05 00:17:25.707493 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:17:27.450122 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:27.450230 | orchestrator | 2026-02-05 00:17:27.450248 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-05 00:17:27.550186 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-05 00:17:27.550287 | orchestrator | 2026-02-05 00:17:27.550304 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-05 00:17:27.607531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-05 00:17:27.607632 | orchestrator | 2026-02-05 00:17:27.607649 | orchestrator | TASK [osism.services.traefik : Create required directories] 
******************** 2026-02-05 00:17:28.802325 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-05 00:17:28.802428 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-05 00:17:28.802445 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-05 00:17:28.802457 | orchestrator | 2026-02-05 00:17:28.802470 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-05 00:17:30.638128 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-05 00:17:30.638225 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-05 00:17:30.638240 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-05 00:17:30.638252 | orchestrator | 2026-02-05 00:17:30.638265 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-02-05 00:17:31.272729 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:17:31.272827 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:31.272841 | orchestrator | 2026-02-05 00:17:31.272852 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-05 00:17:31.954968 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:17:31.955101 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:31.955120 | orchestrator | 2026-02-05 00:17:31.955133 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-05 00:17:32.013931 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:17:32.014125 | orchestrator | 2026-02-05 00:17:32.014146 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-05 00:17:32.365909 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:32.366000 | orchestrator | 2026-02-05 00:17:32.366013 | orchestrator | 
TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-05 00:17:32.433748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-05 00:17:32.433830 | orchestrator | 2026-02-05 00:17:32.433841 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-05 00:17:33.498340 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:33.498425 | orchestrator | 2026-02-05 00:17:33.498437 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-05 00:17:34.288134 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:34.288234 | orchestrator | 2026-02-05 00:17:34.288255 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-05 00:17:47.696222 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:47.696916 | orchestrator | 2026-02-05 00:17:47.696959 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-05 00:17:47.746003 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:17:47.746181 | orchestrator | 2026-02-05 00:17:47.746199 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-05 00:17:47.746213 | orchestrator | 2026-02-05 00:17:47.746224 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:17:49.549243 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:49.549427 | orchestrator | 2026-02-05 00:17:49.549478 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-05 00:17:49.651935 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-05 00:17:49.652023 | orchestrator | 2026-02-05 00:17:49.652036 | orchestrator | TASK 
[osism.services.manager : Include install tasks] ************************** 2026-02-05 00:17:49.716093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:17:49.716200 | orchestrator | 2026-02-05 00:17:49.716216 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-05 00:17:52.134402 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:52.134507 | orchestrator | 2026-02-05 00:17:52.134524 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-05 00:17:52.177248 | orchestrator | ok: [testbed-manager] 2026-02-05 00:17:52.177336 | orchestrator | 2026-02-05 00:17:52.177352 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-05 00:17:52.293837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-05 00:17:52.294110 | orchestrator | 2026-02-05 00:17:52.294138 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-05 00:17:55.011400 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-05 00:17:55.011504 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-05 00:17:55.011520 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-05 00:17:55.011532 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-05 00:17:55.011543 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-05 00:17:55.011554 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-05 00:17:55.011566 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-05 00:17:55.011577 | orchestrator | changed: [testbed-manager] 
=> (item=/opt/state) 2026-02-05 00:17:55.011588 | orchestrator | 2026-02-05 00:17:55.011602 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-05 00:17:55.640431 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:55.640532 | orchestrator | 2026-02-05 00:17:55.640549 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-05 00:17:56.323426 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:56.323531 | orchestrator | 2026-02-05 00:17:56.323547 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-05 00:17:56.403180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-05 00:17:56.403276 | orchestrator | 2026-02-05 00:17:56.403292 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-05 00:17:57.642976 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-05 00:17:57.643133 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-05 00:17:57.643149 | orchestrator | 2026-02-05 00:17:57.643159 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-05 00:17:58.274306 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:58.274382 | orchestrator | 2026-02-05 00:17:58.274394 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-05 00:17:58.332296 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:17:58.332340 | orchestrator | 2026-02-05 00:17:58.332349 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-05 00:17:58.413743 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-05 00:17:58.413822 | orchestrator | 2026-02-05 00:17:58.413832 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-05 00:17:59.024949 | orchestrator | changed: [testbed-manager] 2026-02-05 00:17:59.025032 | orchestrator | 2026-02-05 00:17:59.025088 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-05 00:17:59.073428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-05 00:17:59.073585 | orchestrator | 2026-02-05 00:17:59.073603 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-05 00:18:00.416714 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:18:00.416837 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-05 00:18:00.416853 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:00.416868 | orchestrator | 2026-02-05 00:18:00.416881 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-05 00:18:01.027780 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:01.027871 | orchestrator | 2026-02-05 00:18:01.027886 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-05 00:18:01.083920 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:18:01.084011 | orchestrator | 2026-02-05 00:18:01.084026 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-05 00:18:01.174960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-05 00:18:01.175130 | orchestrator | 
2026-02-05 00:18:01.175149 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-05 00:18:01.686303 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:01.686401 | orchestrator | 2026-02-05 00:18:01.686440 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-05 00:18:02.092497 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:02.092613 | orchestrator | 2026-02-05 00:18:02.092638 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-05 00:18:03.389994 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-05 00:18:03.390188 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-05 00:18:03.390206 | orchestrator | 2026-02-05 00:18:03.390218 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-05 00:18:04.052793 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:04.052924 | orchestrator | 2026-02-05 00:18:04.052950 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-05 00:18:04.433896 | orchestrator | ok: [testbed-manager] 2026-02-05 00:18:04.433983 | orchestrator | 2026-02-05 00:18:04.433995 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-05 00:18:04.813894 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:04.814130 | orchestrator | 2026-02-05 00:18:04.814152 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-05 00:18:04.854747 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:18:04.854838 | orchestrator | 2026-02-05 00:18:04.854856 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-05 00:18:04.920824 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-05 00:18:04.920936 | orchestrator | 2026-02-05 00:18:04.920962 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-05 00:18:04.969525 | orchestrator | ok: [testbed-manager] 2026-02-05 00:18:04.969622 | orchestrator | 2026-02-05 00:18:04.969639 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-05 00:18:07.043511 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-05 00:18:07.043603 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-05 00:18:07.043619 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-05 00:18:07.043632 | orchestrator | 2026-02-05 00:18:07.043645 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-05 00:18:07.698109 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:07.698205 | orchestrator | 2026-02-05 00:18:07.698220 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-05 00:18:08.389265 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:08.389393 | orchestrator | 2026-02-05 00:18:08.389410 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-05 00:18:09.014130 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:09.014215 | orchestrator | 2026-02-05 00:18:09.014236 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-05 00:18:09.077337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-05 00:18:09.077422 | orchestrator | 2026-02-05 00:18:09.077438 | orchestrator | TASK 
[osism.services.manager : Include scripts vars file] ********************** 2026-02-05 00:18:09.115221 | orchestrator | ok: [testbed-manager] 2026-02-05 00:18:09.115322 | orchestrator | 2026-02-05 00:18:09.115339 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-05 00:18:09.746740 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-05 00:18:09.746839 | orchestrator | 2026-02-05 00:18:09.746855 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-05 00:18:09.815545 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-05 00:18:09.815681 | orchestrator | 2026-02-05 00:18:09.815712 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-05 00:18:10.452754 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:10.452880 | orchestrator | 2026-02-05 00:18:10.452906 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-05 00:18:10.992123 | orchestrator | ok: [testbed-manager] 2026-02-05 00:18:10.992231 | orchestrator | 2026-02-05 00:18:10.992251 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-05 00:18:11.033831 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:18:11.033951 | orchestrator | 2026-02-05 00:18:11.033976 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-05 00:18:11.084192 | orchestrator | ok: [testbed-manager] 2026-02-05 00:18:11.084277 | orchestrator | 2026-02-05 00:18:11.084292 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-05 00:18:11.821661 | orchestrator | changed: [testbed-manager] 2026-02-05 00:18:11.821753 | orchestrator | 2026-02-05 
00:18:11.821766 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-05 00:19:17.485498 | orchestrator | changed: [testbed-manager] 2026-02-05 00:19:17.485631 | orchestrator | 2026-02-05 00:19:17.485649 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-05 00:19:19.388215 | orchestrator | ok: [testbed-manager] 2026-02-05 00:19:19.388321 | orchestrator | 2026-02-05 00:19:19.388338 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-05 00:19:19.442495 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:19:19.442620 | orchestrator | 2026-02-05 00:19:19.442648 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-05 00:19:21.955800 | orchestrator | changed: [testbed-manager] 2026-02-05 00:19:21.955877 | orchestrator | 2026-02-05 00:19:21.955886 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-05 00:19:22.035096 | orchestrator | ok: [testbed-manager] 2026-02-05 00:19:22.035220 | orchestrator | 2026-02-05 00:19:22.035261 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-05 00:19:22.035276 | orchestrator | 2026-02-05 00:19:22.035287 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-05 00:19:22.090116 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:19:22.090197 | orchestrator | 2026-02-05 00:19:22.090209 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-05 00:20:22.133414 | orchestrator | Pausing for 60 seconds 2026-02-05 00:20:22.133540 | orchestrator | changed: [testbed-manager] 2026-02-05 00:20:22.133557 | orchestrator | 2026-02-05 00:20:22.133572 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure 
that all containers are up] *** 2026-02-05 00:20:24.668458 | orchestrator | changed: [testbed-manager] 2026-02-05 00:20:24.668578 | orchestrator | 2026-02-05 00:20:24.668597 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-05 00:21:06.188891 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-05 00:21:06.189062 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-02-05 00:21:06.189081 | orchestrator | changed: [testbed-manager] 2026-02-05 00:21:06.189122 | orchestrator | 2026-02-05 00:21:06.189136 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-05 00:21:15.989779 | orchestrator | changed: [testbed-manager] 2026-02-05 00:21:15.989870 | orchestrator | 2026-02-05 00:21:15.989882 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-05 00:21:16.066795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-05 00:21:16.066909 | orchestrator | 2026-02-05 00:21:16.066934 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-05 00:21:16.067030 | orchestrator | 2026-02-05 00:21:16.067046 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-05 00:21:16.109499 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:21:16.109590 | orchestrator | 2026-02-05 00:21:16.109605 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-05 00:21:16.172392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-05 00:21:16.172486 | 
orchestrator | 2026-02-05 00:21:16.172500 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-05 00:21:16.905871 | orchestrator | changed: [testbed-manager] 2026-02-05 00:21:16.906121 | orchestrator | 2026-02-05 00:21:16.906144 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-05 00:21:20.050303 | orchestrator | ok: [testbed-manager] 2026-02-05 00:21:20.050383 | orchestrator | 2026-02-05 00:21:20.050392 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-05 00:21:20.110613 | orchestrator | ok: [testbed-manager] => { 2026-02-05 00:21:20.110714 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-05 00:21:20.110731 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-05 00:21:20.110743 | orchestrator | "Checking running containers against expected versions...", 2026-02-05 00:21:20.110756 | orchestrator | "", 2026-02-05 00:21:20.110770 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-05 00:21:20.110782 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-05 00:21:20.110793 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.110804 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-02-05 00:21:20.110815 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.110827 | orchestrator | "", 2026-02-05 00:21:20.110838 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-05 00:21:20.110849 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-02-05 00:21:20.110860 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.110870 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-02-05 00:21:20.110881 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.110892 | orchestrator 
| "", 2026-02-05 00:21:20.110903 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-05 00:21:20.110914 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-05 00:21:20.110925 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.110936 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-02-05 00:21:20.110947 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.110957 | orchestrator | "", 2026-02-05 00:21:20.111058 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-05 00:21:20.111080 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-05 00:21:20.111098 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111115 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-02-05 00:21:20.111131 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111148 | orchestrator | "", 2026-02-05 00:21:20.111167 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-05 00:21:20.111183 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-05 00:21:20.111232 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111252 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-02-05 00:21:20.111270 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111290 | orchestrator | "", 2026-02-05 00:21:20.111310 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-05 00:21:20.111329 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111348 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111368 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111386 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111404 | orchestrator | "", 2026-02-05 00:21:20.111421 | orchestrator | "Checking service: 
ara-server (ARA Server)", 2026-02-05 00:21:20.111440 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-05 00:21:20.111460 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111479 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-05 00:21:20.111498 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111513 | orchestrator | "", 2026-02-05 00:21:20.111524 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-05 00:21:20.111535 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 00:21:20.111546 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111556 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-05 00:21:20.111567 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111578 | orchestrator | "", 2026-02-05 00:21:20.111599 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-05 00:21:20.111611 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-02-05 00:21:20.111626 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111637 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-02-05 00:21:20.111648 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111659 | orchestrator | "", 2026-02-05 00:21:20.111670 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-05 00:21:20.111681 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 00:21:20.111692 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111703 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-05 00:21:20.111714 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111724 | orchestrator | "", 2026-02-05 00:21:20.111735 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-05 00:21:20.111746 | orchestrator | " Expected: 
registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111757 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111768 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111778 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111789 | orchestrator | "", 2026-02-05 00:21:20.111800 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-05 00:21:20.111810 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111821 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111832 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111843 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111853 | orchestrator | "", 2026-02-05 00:21:20.111864 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-05 00:21:20.111875 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111886 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111896 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111907 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.111918 | orchestrator | "", 2026-02-05 00:21:20.111928 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-05 00:21:20.111939 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.111950 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.111961 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.112028 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.112048 | orchestrator | "", 2026-02-05 00:21:20.112060 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-05 00:21:20.112092 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.112103 | orchestrator | " Enabled: true", 2026-02-05 00:21:20.112114 | 
orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-02-05 00:21:20.112125 | orchestrator | " Status: ✅ MATCH", 2026-02-05 00:21:20.112136 | orchestrator | "", 2026-02-05 00:21:20.112146 | orchestrator | "=== Summary ===", 2026-02-05 00:21:20.112157 | orchestrator | "Errors (version mismatches): 0", 2026-02-05 00:21:20.112168 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-05 00:21:20.112179 | orchestrator | "", 2026-02-05 00:21:20.112190 | orchestrator | "✅ All running containers match expected versions!" 2026-02-05 00:21:20.112201 | orchestrator | ] 2026-02-05 00:21:20.112212 | orchestrator | } 2026-02-05 00:21:20.112223 | orchestrator | 2026-02-05 00:21:20.112235 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-05 00:21:20.174792 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:21:20.174879 | orchestrator | 2026-02-05 00:21:20.174893 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:21:20.174907 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-05 00:21:20.174919 | orchestrator | 2026-02-05 00:21:20.273947 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-05 00:21:20.274110 | orchestrator | + deactivate 2026-02-05 00:21:20.274125 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-05 00:21:20.274142 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-05 00:21:20.274153 | orchestrator | + export PATH 2026-02-05 00:21:20.274164 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-05 00:21:20.274177 | orchestrator | + '[' -n '' ']' 2026-02-05 00:21:20.274188 | orchestrator | + hash -r 2026-02-05 00:21:20.274198 | orchestrator | + '[' -n '' ']' 2026-02-05 
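The version check report above compares, per service, the expected image tag against what the container is actually running, then counts mismatches and missing containers in the summary. A rough sketch of that per-service comparison (the function name and output strings are mine; only the docker fields and the MATCH/mismatch/warning distinction come from the report):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of one step of the container version check traced
# above: compare a container's running image against the expected tag.
check_service_version() {
    local name="$1" expected="$2" running
    # A container that is expected but not running is a warning, not an error.
    if ! running=$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null); then
        echo "WARNING: ${name} not running"
        return 0
    fi
    if [[ "$running" == "$expected" ]]; then
        echo "MATCH"
    else
        echo "MISMATCH: expected ${expected}, running ${running}"
        return 1
    fi
}
```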
00:21:20.274209 | orchestrator | + unset VIRTUAL_ENV 2026-02-05 00:21:20.274220 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-05 00:21:20.274231 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-05 00:21:20.274241 | orchestrator | + unset -f deactivate 2026-02-05 00:21:20.274253 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-05 00:21:20.282183 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 00:21:20.282266 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-05 00:21:20.282281 | orchestrator | + local max_attempts=60 2026-02-05 00:21:20.282295 | orchestrator | + local name=ceph-ansible 2026-02-05 00:21:20.282306 | orchestrator | + local attempt_num=1 2026-02-05 00:21:20.283076 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:21:20.317698 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:21:20.317770 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-05 00:21:20.317778 | orchestrator | + local max_attempts=60 2026-02-05 00:21:20.317787 | orchestrator | + local name=kolla-ansible 2026-02-05 00:21:20.317794 | orchestrator | + local attempt_num=1 2026-02-05 00:21:20.318278 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-05 00:21:20.346118 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:21:20.346243 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-05 00:21:20.346272 | orchestrator | + local max_attempts=60 2026-02-05 00:21:20.346296 | orchestrator | + local name=osism-ansible 2026-02-05 00:21:20.346315 | orchestrator | + local attempt_num=1 2026-02-05 00:21:20.346333 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-05 00:21:20.375359 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:21:20.375447 | orchestrator | + [[ true == 
\t\r\u\e ]] 2026-02-05 00:21:20.375462 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-05 00:21:21.048874 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-05 00:21:21.233133 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-05 00:21:21.233262 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233280 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233293 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-05 00:21:21.233306 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-02-05 00:21:21.233317 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233328 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233339 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-02-05 00:21:21.233366 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233378 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute 
(healthy) 3306/tcp 2026-02-05 00:21:21.233389 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233400 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-02-05 00:21:21.233411 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233421 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-05 00:21:21.233433 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.233443 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-02-05 00:21:21.238893 | orchestrator | ++ semver latest 7.0.0 2026-02-05 00:21:21.283694 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:21:21.283787 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-05 00:21:21.283804 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-05 00:21:21.288053 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-05 00:21:33.413450 | orchestrator | 2026-02-05 00:21:33 | INFO  | Prepare task for execution of resolvconf. 2026-02-05 00:21:33.621748 | orchestrator | 2026-02-05 00:21:33 | INFO  | Task 8cd67f76-39f6-4933-9289-34e08ffa0055 (resolvconf) was prepared for execution. 
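The `set -x` trace above shows a `wait_for_container_healthy` helper polling each manager container (`ceph-ansible`, `kolla-ansible`, `osism-ansible`) until Docker reports it healthy. A minimal sketch of such a helper, reconstructed from the traced variable names (the real script's retry interval and error handling may differ):

```shell
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted. Mirrors the traced helper: positional
# args are max_attempts and the container name.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # assumed interval; the trace does not show the sleep
    done
}
```

In the run above every container was already healthy on the first `docker inspect`, so the loop body never executed.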
2026-02-05 00:21:33.621834 | orchestrator | 2026-02-05 00:21:33 | INFO  | It takes a moment until task 8cd67f76-39f6-4933-9289-34e08ffa0055 (resolvconf) has been started and output is visible here. 2026-02-05 00:21:47.579373 | orchestrator | 2026-02-05 00:21:47.579481 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-05 00:21:47.579499 | orchestrator | 2026-02-05 00:21:47.579511 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:21:47.579523 | orchestrator | Thursday 05 February 2026 00:21:37 +0000 (0:00:00.130) 0:00:00.130 ***** 2026-02-05 00:21:47.579535 | orchestrator | ok: [testbed-manager] 2026-02-05 00:21:47.579546 | orchestrator | 2026-02-05 00:21:47.579558 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-05 00:21:47.579569 | orchestrator | Thursday 05 February 2026 00:21:41 +0000 (0:00:03.419) 0:00:03.550 ***** 2026-02-05 00:21:47.579580 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:21:47.579592 | orchestrator | 2026-02-05 00:21:47.579603 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-05 00:21:47.579614 | orchestrator | Thursday 05 February 2026 00:21:41 +0000 (0:00:00.062) 0:00:03.612 ***** 2026-02-05 00:21:47.579625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-05 00:21:47.579637 | orchestrator | 2026-02-05 00:21:47.579649 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-05 00:21:47.579660 | orchestrator | Thursday 05 February 2026 00:21:41 +0000 (0:00:00.074) 0:00:03.687 ***** 2026-02-05 00:21:47.579683 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:21:47.579694 | orchestrator | 2026-02-05 00:21:47.579705 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-05 00:21:47.579716 | orchestrator | Thursday 05 February 2026 00:21:41 +0000 (0:00:00.065) 0:00:03.752 ***** 2026-02-05 00:21:47.579727 | orchestrator | ok: [testbed-manager] 2026-02-05 00:21:47.579738 | orchestrator | 2026-02-05 00:21:47.579749 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-05 00:21:47.579760 | orchestrator | Thursday 05 February 2026 00:21:42 +0000 (0:00:00.913) 0:00:04.666 ***** 2026-02-05 00:21:47.579771 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:21:47.579782 | orchestrator | 2026-02-05 00:21:47.579793 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-05 00:21:47.579804 | orchestrator | Thursday 05 February 2026 00:21:42 +0000 (0:00:00.052) 0:00:04.719 ***** 2026-02-05 00:21:47.579814 | orchestrator | ok: [testbed-manager] 2026-02-05 00:21:47.579825 | orchestrator | 2026-02-05 00:21:47.579836 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-05 00:21:47.579847 | orchestrator | Thursday 05 February 2026 00:21:42 +0000 (0:00:00.458) 0:00:05.177 ***** 2026-02-05 00:21:47.579858 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:21:47.579868 | orchestrator | 2026-02-05 00:21:47.579879 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-05 00:21:47.579891 | orchestrator | Thursday 05 February 2026 00:21:42 +0000 (0:00:00.081) 0:00:05.259 ***** 2026-02-05 00:21:47.579903 | orchestrator | changed: [testbed-manager] 2026-02-05 00:21:47.579917 | orchestrator | 2026-02-05 
00:21:47.579929 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-05 00:21:47.579943 | orchestrator | Thursday 05 February 2026 00:21:43 +0000 (0:00:00.476) 0:00:05.735 ***** 2026-02-05 00:21:47.579956 | orchestrator | changed: [testbed-manager] 2026-02-05 00:21:47.579969 | orchestrator | 2026-02-05 00:21:47.579981 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-05 00:21:47.579995 | orchestrator | Thursday 05 February 2026 00:21:44 +0000 (0:00:00.957) 0:00:06.692 ***** 2026-02-05 00:21:47.580007 | orchestrator | ok: [testbed-manager] 2026-02-05 00:21:47.580051 | orchestrator | 2026-02-05 00:21:47.580087 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-05 00:21:47.580101 | orchestrator | Thursday 05 February 2026 00:21:46 +0000 (0:00:01.878) 0:00:08.571 ***** 2026-02-05 00:21:47.580113 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-05 00:21:47.580126 | orchestrator | 2026-02-05 00:21:47.580138 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-05 00:21:47.580151 | orchestrator | Thursday 05 February 2026 00:21:46 +0000 (0:00:00.089) 0:00:08.660 ***** 2026-02-05 00:21:47.580163 | orchestrator | changed: [testbed-manager] 2026-02-05 00:21:47.580176 | orchestrator | 2026-02-05 00:21:47.580189 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:21:47.580203 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 00:21:47.580214 | orchestrator | 2026-02-05 00:21:47.580225 | orchestrator | 2026-02-05 00:21:47.580235 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-05 00:21:47.580246 | orchestrator | Thursday 05 February 2026 00:21:47 +0000 (0:00:01.144) 0:00:09.804 ***** 2026-02-05 00:21:47.580257 | orchestrator | =============================================================================== 2026-02-05 00:21:47.580268 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s 2026-02-05 00:21:47.580278 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.88s 2026-02-05 00:21:47.580289 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-02-05 00:21:47.580300 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.96s 2026-02-05 00:21:47.580310 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.91s 2026-02-05 00:21:47.580321 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s 2026-02-05 00:21:47.580350 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2026-02-05 00:21:47.580362 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-05 00:21:47.580372 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-05 00:21:47.580383 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-02-05 00:21:47.580394 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-02-05 00:21:47.580405 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-02-05 00:21:47.580415 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2026-02-05 00:21:47.899531 | 
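The resolvconf play above removes competing resolver packages and links `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf` (the `changed` link task). A small illustrative check for that end state, with the path taken as a parameter so it is not tied to a live system:

```shell
# Verify that a resolv.conf path is the systemd-resolved stub symlink,
# as configured by the osism.commons.resolvconf role above.
# The default path is the real file; pass another path for testing.
check_resolv_symlink() {
    local conf="${1:-/etc/resolv.conf}"
    # readlink prints the link target even if the target file is absent
    [[ "$(readlink "$conf")" == *"systemd/resolve/stub-resolv.conf" ]]
}
```

On the testbed manager this would return success once the role has run, since the role's "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task reported `changed`.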
orchestrator | + osism apply sshconfig 2026-02-05 00:21:59.683122 | orchestrator | 2026-02-05 00:21:59 | INFO  | Prepare task for execution of sshconfig. 2026-02-05 00:21:59.750098 | orchestrator | 2026-02-05 00:21:59 | INFO  | Task 94e998c9-5711-4ab8-a038-361dda645ebd (sshconfig) was prepared for execution. 2026-02-05 00:21:59.750209 | orchestrator | 2026-02-05 00:21:59 | INFO  | It takes a moment until task 94e998c9-5711-4ab8-a038-361dda645ebd (sshconfig) has been started and output is visible here. 2026-02-05 00:22:10.230516 | orchestrator | 2026-02-05 00:22:10.230624 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-05 00:22:10.230643 | orchestrator | 2026-02-05 00:22:10.230655 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-05 00:22:10.230667 | orchestrator | Thursday 05 February 2026 00:22:03 +0000 (0:00:00.124) 0:00:00.124 ***** 2026-02-05 00:22:10.230679 | orchestrator | ok: [testbed-manager] 2026-02-05 00:22:10.230691 | orchestrator | 2026-02-05 00:22:10.230702 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-05 00:22:10.230712 | orchestrator | Thursday 05 February 2026 00:22:03 +0000 (0:00:00.474) 0:00:00.598 ***** 2026-02-05 00:22:10.230752 | orchestrator | changed: [testbed-manager] 2026-02-05 00:22:10.230764 | orchestrator | 2026-02-05 00:22:10.230775 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-05 00:22:10.230786 | orchestrator | Thursday 05 February 2026 00:22:04 +0000 (0:00:00.458) 0:00:01.057 ***** 2026-02-05 00:22:10.230796 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-05 00:22:10.230807 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-05 00:22:10.230818 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-05 00:22:10.230829 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-05 00:22:10.230840 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-05 00:22:10.230850 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-02-05 00:22:10.230861 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-05 00:22:10.230872 | orchestrator | 2026-02-05 00:22:10.230882 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-05 00:22:10.230893 | orchestrator | Thursday 05 February 2026 00:22:09 +0000 (0:00:05.035) 0:00:06.092 ***** 2026-02-05 00:22:10.230904 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:22:10.230915 | orchestrator | 2026-02-05 00:22:10.230926 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-05 00:22:10.230936 | orchestrator | Thursday 05 February 2026 00:22:09 +0000 (0:00:00.060) 0:00:06.153 ***** 2026-02-05 00:22:10.230947 | orchestrator | changed: [testbed-manager] 2026-02-05 00:22:10.230958 | orchestrator | 2026-02-05 00:22:10.230968 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:22:10.230981 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:22:10.230993 | orchestrator | 2026-02-05 00:22:10.231003 | orchestrator | 2026-02-05 00:22:10.231014 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:22:10.231025 | orchestrator | Thursday 05 February 2026 00:22:10 +0000 (0:00:00.511) 0:00:06.665 ***** 2026-02-05 00:22:10.231036 | orchestrator | =============================================================================== 2026-02-05 00:22:10.231048 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.04s 2026-02-05 00:22:10.231087 | orchestrator | 
osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2026-02-05 00:22:10.231100 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.47s 2026-02-05 00:22:10.231113 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.46s 2026-02-05 00:22:10.231125 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2026-02-05 00:22:10.550595 | orchestrator | + osism apply known-hosts 2026-02-05 00:22:22.686434 | orchestrator | 2026-02-05 00:22:22 | INFO  | Prepare task for execution of known-hosts. 2026-02-05 00:22:22.765633 | orchestrator | 2026-02-05 00:22:22 | INFO  | Task 3f3992e7-8758-4c6f-a2f1-332643366010 (known-hosts) was prepared for execution. 2026-02-05 00:22:22.765752 | orchestrator | 2026-02-05 00:22:22 | INFO  | It takes a moment until task 3f3992e7-8758-4c6f-a2f1-332643366010 (known-hosts) has been started and output is visible here. 2026-02-05 00:22:38.128099 | orchestrator | 2026-02-05 00:22:38.128233 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-05 00:22:38.128251 | orchestrator | 2026-02-05 00:22:38.128263 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-05 00:22:38.128275 | orchestrator | Thursday 05 February 2026 00:22:26 +0000 (0:00:00.142) 0:00:00.142 ***** 2026-02-05 00:22:38.128287 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-05 00:22:38.128299 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-05 00:22:38.128309 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-05 00:22:38.128346 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-05 00:22:38.128357 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-05 00:22:38.128368 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 
2026-02-05 00:22:38.128378 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-05 00:22:38.128389 | orchestrator | 2026-02-05 00:22:38.128400 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-05 00:22:38.128412 | orchestrator | Thursday 05 February 2026 00:22:32 +0000 (0:00:05.647) 0:00:05.790 ***** 2026-02-05 00:22:38.128451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-05 00:22:38.128467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-05 00:22:38.128491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-05 00:22:38.128502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-05 00:22:38.128513 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-05 00:22:38.128523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-05 00:22:38.128534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-05 00:22:38.128545 
| orchestrator | 2026-02-05 00:22:38.128556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:38.128567 | orchestrator | Thursday 05 February 2026 00:22:32 +0000 (0:00:00.150) 0:00:05.940 ***** 2026-02-05 00:22:38.128581 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjDEBp65kQwXK9KaquT21SeXiPloaGUgMu+DqdFrqjmWYcnjCp70hh35xbLyN2MxG9q8L3TQW3wcjfHSgAF16eDH9yP4FaiU7T5MR01Cbq1/K+xHVJeZX+RjcYjJKfVvW+ZlrVH6CUayxH6wG9LM10jp4Ay3E4glDuYFhscLGrQfzUeG83I/tZb0lOSl4zbQMVAL02oHqftEfxRkMVj2oTMYK6Z4C4VwBH9SFMlxhPK88fl5m85mNrJgW0cpbmcy6M/4aUy+KcjLT3bfeqsV+R7CIMeSvZqyK78pQLpz19HXZvPL+3oQFSiDHy2IGPLs2t+HJ3G9tKt5ciamVWiIMgENWODFTHFlUCRfRMS1TkRMLjBhZe7lMNpJTHx6/HGcpUoWVrHS3u+K0UqSiubxJxBfPuvTihFGs/cYOsbwJE1KHRsTecE/AhfHt/TZq/y0CCl4PUxXMUJvQj9mHVMGMAF3qwkQ6KglX78R6YqjdUADV1GB+KtJTEVBMqsu4ssGM=) 2026-02-05 00:22:38.128597 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI9g0UZO7BgmWCiejhSuLSwKTl/onVzyXiJPhYsAo/TboTYYzrcQddTNid0G63NrcZpkQadQQzOFZph7teVpBIU=) 2026-02-05 00:22:38.128610 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaBzU83qWYPxRAUIM3ZNZ4mSwbt8IVfEQVnOOZts294) 2026-02-05 00:22:38.128622 | orchestrator | 2026-02-05 00:22:38.128633 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:38.128644 | orchestrator | Thursday 05 February 2026 00:22:33 +0000 (0:00:01.055) 0:00:06.996 ***** 2026-02-05 00:22:38.128679 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCojIiQhsRRqVrt7AktubyOvQmMxgVX/VKN1NV8j75KbYREq7u4ym4TNFLmxlk8yIVx6SUIqQJKn74tTgTXuP1NjIksgAjkWKXNRiYQUfqcyEe0uAcSVm0z99a5rKB/7TEWASJv434LFxJUSqQa9hawXsRLV5q/QhqTsaABx1GN6lFW3WttPtLP3BqW8zrF77X9KWVFEXf8p0s2UPldPEyyRNzrPzQCXI8A4BkFNOaXd2DzGguo5GRfCtetZOUCuDYdW5Xeitt074duZ0cDVLO/agQDVgJx5aTL/Ym+RC9YVSYezEB4LhmozX5/l9hUahnuvteWaDRUCq/4xrAU0NOKl6pROP1pPEOu/mvmufv2O6cbaYKUVzVe6RcwNbvGl66LlcISXnUIzmKZZho2kh5RilEw+hBscCxImS16M7GClVgHZ0SXgbDULKRO5gmuFPQIjbpSr9Ncz/GnAJkNDhqsYp74YTYNHBaTAUxMC/PE+xwLMZfmwskIJMADv2qGRjU=) 2026-02-05 00:22:38.128700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKswHRrhUMJyuzvTy8yLrBq8EW2DEnCIVd9PFepNiPz/Tf4wQIIUPoXV79zNnb7QJxv3h6pvGDLSz8rKtpIvgI=) 2026-02-05 00:22:38.128712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINH3UiaV/CeUdrXByoq24RQIBOpiAnylPjmfjXtsRSxe) 2026-02-05 00:22:38.128723 | orchestrator | 2026-02-05 00:22:38.128734 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:38.128745 | orchestrator | Thursday 05 February 2026 00:22:34 +0000 (0:00:00.982) 0:00:07.979 ***** 2026-02-05 00:22:38.128757 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrycZIm7en1AsT440HicWmkfL+5MXW0DxYG74vDjBi4TSL98FYx6hGS1b2ERSxbBOboi+v9tl0s8t+HLMHxQ+dXoL0gIhK8bLF+ifMm+TVr70a69QEM06bXERWcTBa/ZkdgLxBg8IoDL+/1vnVd97s7wlR5tdvu1n/2CgFFoDA31M/cS+Ip2OS6zbV2DCgQCtXxgkf7An9kAY3GPsIGyK8VO3QTzJ0GC/gm8lYZ4LSNLfAOAh00wNxpq/FA/bgrnVq+KG89FMWNH1wg9mqBbCdBsnkr8vOwJKOlng6NbstDLzw++ol9AF1lqj1ghpfWv61n3rYiXL2oOjd/7YWzpXdBKNdR2qtJP5hbuoxTyi1ATkH1VwmhPBzqWBGtKTgWInHBkqlkCBlMv4y7OIp6WQeAPzXf4tb/0nBdqwXW0da71a+1vRSNIkqe+mrAnZmFV41lMfInSYAvYrZxIoFGN8t4/UuVzGbH9iQJflLPBIbRPDEqciEOPoG/9UQe8BNi2M=) 2026-02-05 00:22:38.128769 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMbm6GgKukkt0CXBIgpHWsVfpCTx5uQjVgrf9QfxVpvHoIoIQxFuaMWphQkZLSpG4sEg6oVpM2OP26Z7V/32TTc=) 2026-02-05 00:22:38.128852 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEsdIyuIpa/6uCs/oGQqhmwFqAh8/H1eeqZOWiuNn/U5) 2026-02-05 00:22:38.128864 | orchestrator | 2026-02-05 00:22:38.128875 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:38.128886 | orchestrator | Thursday 05 February 2026 00:22:35 +0000 (0:00:01.035) 0:00:09.014 ***** 2026-02-05 00:22:38.128897 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9Qbc44oDKxt97PW3tcS3E3rYzIDHqPWEuIt1q5nECkt6rXSyZYhuYI81ryUii3Fqn1YStG5sCnyH3YXoMhF90=) 2026-02-05 00:22:38.128909 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrJu8Zfb/iP/hmFil/9ZRBZTakRd96Ixl5rPrOXLQYfzMiOAMcNxf5qUP4K5BvBodnr9yJjkG1OLkjPmGiup8Nb2eayEnNOr1ld97jb9o63MuTKXKP28KzAw58ZxQVaMd9N0kyy7OnehpVITs+OQ8g45E9mi07wpCVqnBtp/9DblG9uDsk9LoL62ccF3y3mqC7jV33lIE8pjpfhb2e1kk6arJELyH5jAPFGbsqJVVdaj8FE0dzfhl0uEc4ySMrgxghX/rspOADkM8c8zTN1nZuwR+oJVS3mh4+dxz+OeM4cCTawVuMN2fOty93wFgr6rwOpJdHXqzcPxzUKw/ZydbzPNCXEeN2Tv+M1cx9wcUxTi0HXXoMn5Dgd95Daz2fxhZFRqVFhO4XBKcnPNriBxirDMgewkCxdj1iOhbNO3FO7AMafVuODpAxb/cJ5ls8Pgh2l9MiJsbFjhM2+QicVscjGbDFzfBRqkcJ9x1J4PNQ/SG4SdYgqV+0HKcJERtETJM=) 2026-02-05 00:22:38.128921 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILnHKNXT6BMouS3DmtJWYa1BREpxUCY2avGra9wGNsSL) 2026-02-05 00:22:38.128932 | orchestrator | 2026-02-05 00:22:38.128943 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:38.128953 | orchestrator | Thursday 05 February 2026 00:22:36 +0000 (0:00:01.054) 
0:00:10.069 ***** 2026-02-05 00:22:38.128964 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCwnBvc8J1YEUpkYJqU+KNYpJqw64RjnkXqEIq/nRdY3AANiboOsknUV7jO4FvrnqTQqCsNcNdCSMaUyr5H5Yyc=) 2026-02-05 00:22:38.128976 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJbWn4wJ7KelYjY6xK2TuqCQ2CQEw09pNQHRUBQhBPRp) 2026-02-05 00:22:38.128993 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTUJ62pVkslcWiS5+aeyaBwylYqAFV1zBejkZk4spPvjPhYYRz0fU/Z0ma31iB2H9qMMJv+q7VQp4Y8ZPfKcLdkh1+uZc7AAnmDc5K5WvkZj3DzrCrZvDr4RFp35b5bjobd5Me+RHQnC3+bwIahCVnIgm15BwPiS++Il0mL4FhKt8IIiFEVCwjZq9iuWXsdP0HsCxBxiyCc+Z104+baZCgvewsYtddikuRKji0ld1wFe6AbCS5tN7KeIg1oS6j17nIishhITPVn78s3jg5bQIBl4hdLs8OHU2w1x+XPYLYKeK6R90vlxUoJwkoSo8T4zNTdf2By8R9AzGwKRJ3tXnvU8pByXXcGYOClzMtR/nlqweij1kP7pYNVlHGneG+3vKEvbOR9YlQQj90vweR3ny+MLNwKPEiJJpsyMvVhNLeCU5tfTwMmqqY3BO1aURZy1UUxceDsqQs2+T6Lzb7IJxzcjbfS/zjpvSaCj9iQb5ddvVi6JMtItJLJ79qwnhzZmc=) 2026-02-05 00:22:38.129005 | orchestrator | 2026-02-05 00:22:38.129016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:38.129027 | orchestrator | Thursday 05 February 2026 00:22:37 +0000 (0:00:01.075) 0:00:11.145 ***** 2026-02-05 00:22:38.129047 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrIZIJuYfw/Q6s3wxrvnl5QJSj9gYPq6WGV+eKujhOrx69LQRnaD2Mcj8KJIUfed3WjKrSVigHg8EnEGjwi4xbL4q7bx0gyjcL4lJE+rjOEsUJyFvGtgT636LB1nVv1rkMylI4uVMwB9v7BQT9LfacnqRWqEyhpPSFyEze8apxnOKzd8VbJn/TGOP15m3iVuHpzharCxQIAd0DH2dYUmNKQvkmWH0ExqaV9mByFlbKO71xWJoX8U3eSmOjStRfxkzfHmcr2gcLNkI5RKKV2+F7sGnRL5szq9PZg0MzXWRVfquuVHonDedDDTzwk/4FgqVmtl+NaylrUyp+H4runTvGiXi63WQ7Rsud72hpu6geFBwJes3x3DR0NqHQWmHZn/aBRWh1kPcfiyEuXWeo5y/7W3aL25JZA6p12E59Nq474HB8Dz838ORVYKYTUWAuaorVntFgeX1w6X/avocd6lMK68DsfIiFCFdChGKsqpOODSxZVQQTx1yGirsrN39o/Xs=) 2026-02-05 00:22:49.038407 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJmWzxEUJhri2wdrZxlZaQR5PXv3M4GIFV+nlOffzB4VyQOIlNYM2NC0JAV9uD92xWPFYjqJvqqRzSB9+FXRhTg=) 2026-02-05 00:22:49.038513 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKhcbKyB4pcSsLHBV6XQPYtcZHBv6lMBF7qswvXxmHCL) 2026-02-05 00:22:49.038529 | orchestrator | 2026-02-05 00:22:49.038542 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:49.038555 | orchestrator | Thursday 05 February 2026 00:22:38 +0000 (0:00:01.054) 0:00:12.199 ***** 2026-02-05 00:22:49.038567 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOJkYTf0Ae576JXtHHaQYD2pHd09dV2IkMTAD0DG1pTKVEa/H2wOembKoFbPcJFwGDlc28KX4kDPRX2Z+dl89Js=) 2026-02-05 00:22:49.038581 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDjB8Ok2+cco/zscHfiHQqbbnx1SuKIKrrOyYM9tGnwGKbs1vRc2XYVaJTJ+OaPcKHWtStOzKHK9qMcSrj23zpM9dx+8SgXqcgAUpFBihym5yXgBAMjgxQaw6YYDHxbTVcelOJYyD9qPQeJri3bWgDwVWsAWKVA/VD3rmxHvUpvKYCBN7cw2fgHuMGN6wf6fg2zYx6rA7DtKeMynJUlp85NLhJc8RKRYVqz2z9tuGsGzYKV1ZTCnb4B2lCZ9EN/PeTGpKe1bEh809AyKl4P9beRF94FDf9h3dJVlxNnNrmWwK1fuzhq629yxG2+gQJQ2VbaZ6qvznpvula9b9fAPQCtF1keRIIlIPbW2CZys6R1vncsFMowTIv3pd3QtJrmeQ8s4bYGqcetCBQIqxVKzvpojc/96zdb7klICHK+DAK6NjskxvCYeg0VB01u2nNkQZ4+qPpqHFRp9/WxZi++ry/RzFtRxMlJ7iSzI0qkKVpFdT/z9uFz5W45Ygzch7TDa4M=) 2026-02-05 00:22:49.038595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICArtMqzZ7fE8PaAy/qTOH1myYkHh2sNCqEeZ10xTHpH) 2026-02-05 00:22:49.038606 | orchestrator | 2026-02-05 00:22:49.038617 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-05 00:22:49.038630 | orchestrator | Thursday 05 February 2026 00:22:39 +0000 (0:00:01.021) 0:00:13.220 ***** 2026-02-05 00:22:49.038642 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-05 00:22:49.038654 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-05 00:22:49.038665 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-05 00:22:49.038675 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-05 00:22:49.038686 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-05 00:22:49.038715 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-05 00:22:49.038727 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-05 00:22:49.038763 | orchestrator | 2026-02-05 00:22:49.038775 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-05 00:22:49.038787 | orchestrator | Thursday 05 February 2026 00:22:44 +0000 (0:00:05.125) 0:00:18.346 ***** 2026-02-05 00:22:49.038799 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-05 00:22:49.038812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-05 00:22:49.038823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-05 00:22:49.038834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-05 00:22:49.038845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-05 00:22:49.038855 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-05 00:22:49.038866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-05 00:22:49.038877 | orchestrator | 2026-02-05 00:22:49.038888 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:49.038899 | orchestrator | Thursday 05 February 2026 00:22:45 +0000 (0:00:00.166) 0:00:18.512 ***** 2026-02-05 00:22:49.038910 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI9g0UZO7BgmWCiejhSuLSwKTl/onVzyXiJPhYsAo/TboTYYzrcQddTNid0G63NrcZpkQadQQzOFZph7teVpBIU=) 2026-02-05 00:22:49.038940 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjDEBp65kQwXK9KaquT21SeXiPloaGUgMu+DqdFrqjmWYcnjCp70hh35xbLyN2MxG9q8L3TQW3wcjfHSgAF16eDH9yP4FaiU7T5MR01Cbq1/K+xHVJeZX+RjcYjJKfVvW+ZlrVH6CUayxH6wG9LM10jp4Ay3E4glDuYFhscLGrQfzUeG83I/tZb0lOSl4zbQMVAL02oHqftEfxRkMVj2oTMYK6Z4C4VwBH9SFMlxhPK88fl5m85mNrJgW0cpbmcy6M/4aUy+KcjLT3bfeqsV+R7CIMeSvZqyK78pQLpz19HXZvPL+3oQFSiDHy2IGPLs2t+HJ3G9tKt5ciamVWiIMgENWODFTHFlUCRfRMS1TkRMLjBhZe7lMNpJTHx6/HGcpUoWVrHS3u+K0UqSiubxJxBfPuvTihFGs/cYOsbwJE1KHRsTecE/AhfHt/TZq/y0CCl4PUxXMUJvQj9mHVMGMAF3qwkQ6KglX78R6YqjdUADV1GB+KtJTEVBMqsu4ssGM=) 2026-02-05 00:22:49.038955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIaBzU83qWYPxRAUIM3ZNZ4mSwbt8IVfEQVnOOZts294) 2026-02-05 00:22:49.038968 | orchestrator | 2026-02-05 00:22:49.038981 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:49.038994 | orchestrator | Thursday 05 February 2026 00:22:46 +0000 (0:00:01.026) 0:00:19.539 ***** 2026-02-05 00:22:49.039008 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINH3UiaV/CeUdrXByoq24RQIBOpiAnylPjmfjXtsRSxe) 2026-02-05 00:22:49.039022 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCojIiQhsRRqVrt7AktubyOvQmMxgVX/VKN1NV8j75KbYREq7u4ym4TNFLmxlk8yIVx6SUIqQJKn74tTgTXuP1NjIksgAjkWKXNRiYQUfqcyEe0uAcSVm0z99a5rKB/7TEWASJv434LFxJUSqQa9hawXsRLV5q/QhqTsaABx1GN6lFW3WttPtLP3BqW8zrF77X9KWVFEXf8p0s2UPldPEyyRNzrPzQCXI8A4BkFNOaXd2DzGguo5GRfCtetZOUCuDYdW5Xeitt074duZ0cDVLO/agQDVgJx5aTL/Ym+RC9YVSYezEB4LhmozX5/l9hUahnuvteWaDRUCq/4xrAU0NOKl6pROP1pPEOu/mvmufv2O6cbaYKUVzVe6RcwNbvGl66LlcISXnUIzmKZZho2kh5RilEw+hBscCxImS16M7GClVgHZ0SXgbDULKRO5gmuFPQIjbpSr9Ncz/GnAJkNDhqsYp74YTYNHBaTAUxMC/PE+xwLMZfmwskIJMADv2qGRjU=) 2026-02-05 00:22:49.039045 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCKswHRrhUMJyuzvTy8yLrBq8EW2DEnCIVd9PFepNiPz/Tf4wQIIUPoXV79zNnb7QJxv3h6pvGDLSz8rKtpIvgI=) 2026-02-05 00:22:49.039056 | orchestrator | 2026-02-05 00:22:49.039067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:49.039078 | orchestrator | Thursday 05 February 2026 00:22:47 +0000 (0:00:01.005) 0:00:20.545 ***** 2026-02-05 00:22:49.039089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEsdIyuIpa/6uCs/oGQqhmwFqAh8/H1eeqZOWiuNn/U5) 2026-02-05 00:22:49.039101 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrycZIm7en1AsT440HicWmkfL+5MXW0DxYG74vDjBi4TSL98FYx6hGS1b2ERSxbBOboi+v9tl0s8t+HLMHxQ+dXoL0gIhK8bLF+ifMm+TVr70a69QEM06bXERWcTBa/ZkdgLxBg8IoDL+/1vnVd97s7wlR5tdvu1n/2CgFFoDA31M/cS+Ip2OS6zbV2DCgQCtXxgkf7An9kAY3GPsIGyK8VO3QTzJ0GC/gm8lYZ4LSNLfAOAh00wNxpq/FA/bgrnVq+KG89FMWNH1wg9mqBbCdBsnkr8vOwJKOlng6NbstDLzw++ol9AF1lqj1ghpfWv61n3rYiXL2oOjd/7YWzpXdBKNdR2qtJP5hbuoxTyi1ATkH1VwmhPBzqWBGtKTgWInHBkqlkCBlMv4y7OIp6WQeAPzXf4tb/0nBdqwXW0da71a+1vRSNIkqe+mrAnZmFV41lMfInSYAvYrZxIoFGN8t4/UuVzGbH9iQJflLPBIbRPDEqciEOPoG/9UQe8BNi2M=) 2026-02-05 00:22:49.039113 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMbm6GgKukkt0CXBIgpHWsVfpCTx5uQjVgrf9QfxVpvHoIoIQxFuaMWphQkZLSpG4sEg6oVpM2OP26Z7V/32TTc=) 2026-02-05 00:22:49.039155 | orchestrator | 2026-02-05 00:22:49.039166 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:49.039177 | orchestrator | Thursday 05 February 2026 00:22:48 +0000 (0:00:00.987) 0:00:21.533 ***** 2026-02-05 00:22:49.039195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrJu8Zfb/iP/hmFil/9ZRBZTakRd96Ixl5rPrOXLQYfzMiOAMcNxf5qUP4K5BvBodnr9yJjkG1OLkjPmGiup8Nb2eayEnNOr1ld97jb9o63MuTKXKP28KzAw58ZxQVaMd9N0kyy7OnehpVITs+OQ8g45E9mi07wpCVqnBtp/9DblG9uDsk9LoL62ccF3y3mqC7jV33lIE8pjpfhb2e1kk6arJELyH5jAPFGbsqJVVdaj8FE0dzfhl0uEc4ySMrgxghX/rspOADkM8c8zTN1nZuwR+oJVS3mh4+dxz+OeM4cCTawVuMN2fOty93wFgr6rwOpJdHXqzcPxzUKw/ZydbzPNCXEeN2Tv+M1cx9wcUxTi0HXXoMn5Dgd95Daz2fxhZFRqVFhO4XBKcnPNriBxirDMgewkCxdj1iOhbNO3FO7AMafVuODpAxb/cJ5ls8Pgh2l9MiJsbFjhM2+QicVscjGbDFzfBRqkcJ9x1J4PNQ/SG4SdYgqV+0HKcJERtETJM=) 2026-02-05 00:22:49.039207 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9Qbc44oDKxt97PW3tcS3E3rYzIDHqPWEuIt1q5nECkt6rXSyZYhuYI81ryUii3Fqn1YStG5sCnyH3YXoMhF90=) 2026-02-05 00:22:49.039231 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILnHKNXT6BMouS3DmtJWYa1BREpxUCY2avGra9wGNsSL) 2026-02-05 00:22:53.184966 | orchestrator | 2026-02-05 00:22:53.185073 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:53.185091 | orchestrator | Thursday 05 February 2026 00:22:49 +0000 (0:00:00.920) 0:00:22.453 ***** 2026-02-05 00:22:53.185105 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCwnBvc8J1YEUpkYJqU+KNYpJqw64RjnkXqEIq/nRdY3AANiboOsknUV7jO4FvrnqTQqCsNcNdCSMaUyr5H5Yyc=) 2026-02-05 00:22:53.185179 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTUJ62pVkslcWiS5+aeyaBwylYqAFV1zBejkZk4spPvjPhYYRz0fU/Z0ma31iB2H9qMMJv+q7VQp4Y8ZPfKcLdkh1+uZc7AAnmDc5K5WvkZj3DzrCrZvDr4RFp35b5bjobd5Me+RHQnC3+bwIahCVnIgm15BwPiS++Il0mL4FhKt8IIiFEVCwjZq9iuWXsdP0HsCxBxiyCc+Z104+baZCgvewsYtddikuRKji0ld1wFe6AbCS5tN7KeIg1oS6j17nIishhITPVn78s3jg5bQIBl4hdLs8OHU2w1x+XPYLYKeK6R90vlxUoJwkoSo8T4zNTdf2By8R9AzGwKRJ3tXnvU8pByXXcGYOClzMtR/nlqweij1kP7pYNVlHGneG+3vKEvbOR9YlQQj90vweR3ny+MLNwKPEiJJpsyMvVhNLeCU5tfTwMmqqY3BO1aURZy1UUxceDsqQs2+T6Lzb7IJxzcjbfS/zjpvSaCj9iQb5ddvVi6JMtItJLJ79qwnhzZmc=) 2026-02-05 00:22:53.185221 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJbWn4wJ7KelYjY6xK2TuqCQ2CQEw09pNQHRUBQhBPRp) 2026-02-05 00:22:53.185234 | orchestrator | 2026-02-05 00:22:53.185245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:53.185256 | orchestrator | Thursday 05 February 2026 00:22:50 +0000 (0:00:00.930) 0:00:23.384 ***** 2026-02-05 00:22:53.185267 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJmWzxEUJhri2wdrZxlZaQR5PXv3M4GIFV+nlOffzB4VyQOIlNYM2NC0JAV9uD92xWPFYjqJvqqRzSB9+FXRhTg=) 2026-02-05 00:22:53.185279 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCrIZIJuYfw/Q6s3wxrvnl5QJSj9gYPq6WGV+eKujhOrx69LQRnaD2Mcj8KJIUfed3WjKrSVigHg8EnEGjwi4xbL4q7bx0gyjcL4lJE+rjOEsUJyFvGtgT636LB1nVv1rkMylI4uVMwB9v7BQT9LfacnqRWqEyhpPSFyEze8apxnOKzd8VbJn/TGOP15m3iVuHpzharCxQIAd0DH2dYUmNKQvkmWH0ExqaV9mByFlbKO71xWJoX8U3eSmOjStRfxkzfHmcr2gcLNkI5RKKV2+F7sGnRL5szq9PZg0MzXWRVfquuVHonDedDDTzwk/4FgqVmtl+NaylrUyp+H4runTvGiXi63WQ7Rsud72hpu6geFBwJes3x3DR0NqHQWmHZn/aBRWh1kPcfiyEuXWeo5y/7W3aL25JZA6p12E59Nq474HB8Dz838ORVYKYTUWAuaorVntFgeX1w6X/avocd6lMK68DsfIiFCFdChGKsqpOODSxZVQQTx1yGirsrN39o/Xs=) 2026-02-05 00:22:53.185291 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKhcbKyB4pcSsLHBV6XQPYtcZHBv6lMBF7qswvXxmHCL) 2026-02-05 00:22:53.185302 | orchestrator | 2026-02-05 00:22:53.185313 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-05 00:22:53.185324 | orchestrator | Thursday 05 February 2026 00:22:50 +0000 (0:00:00.954) 0:00:24.339 ***** 2026-02-05 00:22:53.185335 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOJkYTf0Ae576JXtHHaQYD2pHd09dV2IkMTAD0DG1pTKVEa/H2wOembKoFbPcJFwGDlc28KX4kDPRX2Z+dl89Js=) 2026-02-05 00:22:53.185347 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjB8Ok2+cco/zscHfiHQqbbnx1SuKIKrrOyYM9tGnwGKbs1vRc2XYVaJTJ+OaPcKHWtStOzKHK9qMcSrj23zpM9dx+8SgXqcgAUpFBihym5yXgBAMjgxQaw6YYDHxbTVcelOJYyD9qPQeJri3bWgDwVWsAWKVA/VD3rmxHvUpvKYCBN7cw2fgHuMGN6wf6fg2zYx6rA7DtKeMynJUlp85NLhJc8RKRYVqz2z9tuGsGzYKV1ZTCnb4B2lCZ9EN/PeTGpKe1bEh809AyKl4P9beRF94FDf9h3dJVlxNnNrmWwK1fuzhq629yxG2+gQJQ2VbaZ6qvznpvula9b9fAPQCtF1keRIIlIPbW2CZys6R1vncsFMowTIv3pd3QtJrmeQ8s4bYGqcetCBQIqxVKzvpojc/96zdb7klICHK+DAK6NjskxvCYeg0VB01u2nNkQZ4+qPpqHFRp9/WxZi++ry/RzFtRxMlJ7iSzI0qkKVpFdT/z9uFz5W45Ygzch7TDa4M=) 2026-02-05 00:22:53.185359 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICArtMqzZ7fE8PaAy/qTOH1myYkHh2sNCqEeZ10xTHpH) 2026-02-05 00:22:53.185370 | orchestrator | 2026-02-05 00:22:53.185381 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-05 00:22:53.185392 | orchestrator | Thursday 05 February 2026 00:22:51 +0000 (0:00:01.034) 0:00:25.373 ***** 2026-02-05 00:22:53.185403 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-05 00:22:53.185415 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-05 00:22:53.185426 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-05 00:22:53.185437 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-05 00:22:53.185448 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 00:22:53.185459 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-05 00:22:53.185470 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-05 00:22:53.185481 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:22:53.185492 | orchestrator | 2026-02-05 00:22:53.185523 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-05 00:22:53.185536 | orchestrator | Thursday 05 February 2026 00:22:52 +0000 (0:00:00.173) 0:00:25.547 ***** 2026-02-05 00:22:53.185557 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:22:53.185570 | orchestrator | 2026-02-05 00:22:53.185583 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-05 00:22:53.185596 | orchestrator | Thursday 05 February 2026 00:22:52 +0000 (0:00:00.052) 0:00:25.600 ***** 2026-02-05 00:22:53.185609 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:22:53.185621 | orchestrator | 2026-02-05 00:22:53.185634 | orchestrator | TASK [osism.commons.known_hosts : Set file 
permissions] ************************ 2026-02-05 00:22:53.185647 | orchestrator | Thursday 05 February 2026 00:22:52 +0000 (0:00:00.066) 0:00:25.667 ***** 2026-02-05 00:22:53.185661 | orchestrator | changed: [testbed-manager] 2026-02-05 00:22:53.185673 | orchestrator | 2026-02-05 00:22:53.185686 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:22:53.185700 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 00:22:53.185714 | orchestrator | 2026-02-05 00:22:53.185728 | orchestrator | 2026-02-05 00:22:53.185741 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:22:53.185755 | orchestrator | Thursday 05 February 2026 00:22:52 +0000 (0:00:00.706) 0:00:26.373 ***** 2026-02-05 00:22:53.185768 | orchestrator | =============================================================================== 2026-02-05 00:22:53.185780 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.65s 2026-02-05 00:22:53.185793 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.13s 2026-02-05 00:22:53.185806 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-05 00:22:53.185819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-02-05 00:22:53.185831 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-02-05 00:22:53.185844 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-02-05 00:22:53.185858 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-02-05 00:22:53.185870 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-05 
00:22:53.185881 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-02-05 00:22:53.185892 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-05 00:22:53.185903 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-02-05 00:22:53.186077 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-02-05 00:22:53.186099 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-02-05 00:22:53.186110 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-02-05 00:22:53.186120 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-02-05 00:22:53.186153 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-02-05 00:22:53.186164 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.71s 2026-02-05 00:22:53.186175 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-02-05 00:22:53.186186 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-02-05 00:22:53.186197 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-02-05 00:22:53.461724 | orchestrator | + osism apply squid 2026-02-05 00:23:05.495608 | orchestrator | 2026-02-05 00:23:05 | INFO  | Prepare task for execution of squid. 2026-02-05 00:23:05.563459 | orchestrator | 2026-02-05 00:23:05 | INFO  | Task 29a82d07-f0b1-4dbf-b7a7-35612a1e3f34 (squid) was prepared for execution. 
2026-02-05 00:23:05.563549 | orchestrator | 2026-02-05 00:23:05 | INFO  | It takes a moment until task 29a82d07-f0b1-4dbf-b7a7-35612a1e3f34 (squid) has been started and output is visible here. 2026-02-05 00:25:01.798773 | orchestrator | 2026-02-05 00:25:01.798866 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-05 00:25:01.798877 | orchestrator | 2026-02-05 00:25:01.798886 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-05 00:25:01.798893 | orchestrator | Thursday 05 February 2026 00:23:09 +0000 (0:00:00.158) 0:00:00.158 ***** 2026-02-05 00:25:01.798900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:25:01.798908 | orchestrator | 2026-02-05 00:25:01.798914 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-05 00:25:01.798921 | orchestrator | Thursday 05 February 2026 00:23:09 +0000 (0:00:00.082) 0:00:00.240 ***** 2026-02-05 00:25:01.798927 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:01.798934 | orchestrator | 2026-02-05 00:25:01.798940 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-05 00:25:01.798947 | orchestrator | Thursday 05 February 2026 00:23:11 +0000 (0:00:01.417) 0:00:01.658 ***** 2026-02-05 00:25:01.798953 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-05 00:25:01.798960 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-05 00:25:01.798966 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-05 00:25:01.798972 | orchestrator | 2026-02-05 00:25:01.798978 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-05 00:25:01.798984 | orchestrator | Thursday 
05 February 2026 00:23:12 +0000 (0:00:01.123) 0:00:02.781 ***** 2026-02-05 00:25:01.798991 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-05 00:25:01.798997 | orchestrator | 2026-02-05 00:25:01.799003 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-05 00:25:01.799009 | orchestrator | Thursday 05 February 2026 00:23:13 +0000 (0:00:01.050) 0:00:03.831 ***** 2026-02-05 00:25:01.799016 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:01.799022 | orchestrator | 2026-02-05 00:25:01.799028 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-05 00:25:01.799049 | orchestrator | Thursday 05 February 2026 00:23:13 +0000 (0:00:00.331) 0:00:04.163 ***** 2026-02-05 00:25:01.799055 | orchestrator | changed: [testbed-manager] 2026-02-05 00:25:01.799062 | orchestrator | 2026-02-05 00:25:01.799068 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-05 00:25:01.799074 | orchestrator | Thursday 05 February 2026 00:23:14 +0000 (0:00:00.872) 0:00:05.036 ***** 2026-02-05 00:25:01.799080 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-05 00:25:01.799087 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:01.799093 | orchestrator | 2026-02-05 00:25:01.799100 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-05 00:25:01.799106 | orchestrator | Thursday 05 February 2026 00:23:48 +0000 (0:00:34.411) 0:00:39.447 ***** 2026-02-05 00:25:01.799112 | orchestrator | changed: [testbed-manager] 2026-02-05 00:25:01.799118 | orchestrator | 2026-02-05 00:25:01.799124 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-05 00:25:01.799130 | orchestrator | Thursday 05 February 2026 00:24:00 +0000 (0:00:11.904) 0:00:51.352 ***** 2026-02-05 00:25:01.799137 | orchestrator | Pausing for 60 seconds 2026-02-05 00:25:01.799143 | orchestrator | changed: [testbed-manager] 2026-02-05 00:25:01.799149 | orchestrator | 2026-02-05 00:25:01.799156 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-05 00:25:01.799162 | orchestrator | Thursday 05 February 2026 00:25:00 +0000 (0:01:00.091) 0:01:51.443 ***** 2026-02-05 00:25:01.799168 | orchestrator | ok: [testbed-manager] 2026-02-05 00:25:01.799174 | orchestrator | 2026-02-05 00:25:01.799180 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-05 00:25:01.799204 | orchestrator | Thursday 05 February 2026 00:25:00 +0000 (0:00:00.074) 0:01:51.518 ***** 2026-02-05 00:25:01.799211 | orchestrator | changed: [testbed-manager] 2026-02-05 00:25:01.799217 | orchestrator | 2026-02-05 00:25:01.799223 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:25:01.799229 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:25:01.799235 | orchestrator | 2026-02-05 00:25:01.799241 | orchestrator | 2026-02-05 00:25:01.799248 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-05 00:25:01.799254 | orchestrator | Thursday 05 February 2026 00:25:01 +0000 (0:00:00.583) 0:01:52.101 ***** 2026-02-05 00:25:01.799261 | orchestrator | =============================================================================== 2026-02-05 00:25:01.799267 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-05 00:25:01.799273 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.41s 2026-02-05 00:25:01.799279 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.90s 2026-02-05 00:25:01.799285 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2026-02-05 00:25:01.799291 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.12s 2026-02-05 00:25:01.799297 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s 2026-02-05 00:25:01.799303 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.87s 2026-02-05 00:25:01.799350 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.58s 2026-02-05 00:25:01.799358 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s 2026-02-05 00:25:01.799366 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-02-05 00:25:01.799373 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-02-05 00:25:02.078459 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-05 00:25:02.078558 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-02-05 00:25:02.086382 | orchestrator | + set -e 2026-02-05 00:25:02.086547 | orchestrator | + NAMESPACE=kolla 2026-02-05 
00:25:02.086565 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-05 00:25:02.091180 | orchestrator | ++ semver latest 9.0.0 2026-02-05 00:25:02.145308 | orchestrator | + [[ -1 -lt 0 ]] 2026-02-05 00:25:02.145422 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-02-05 00:25:02.145824 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-05 00:25:14.176445 | orchestrator | 2026-02-05 00:25:14 | INFO  | Prepare task for execution of operator. 2026-02-05 00:25:14.261276 | orchestrator | 2026-02-05 00:25:14 | INFO  | Task 48b3f745-dcac-44ba-b661-aeae413538c3 (operator) was prepared for execution. 2026-02-05 00:25:14.261401 | orchestrator | 2026-02-05 00:25:14 | INFO  | It takes a moment until task 48b3f745-dcac-44ba-b661-aeae413538c3 (operator) has been started and output is visible here. 2026-02-05 00:25:30.604112 | orchestrator | 2026-02-05 00:25:30.604222 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-05 00:25:30.604241 | orchestrator | 2026-02-05 00:25:30.604253 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 00:25:30.604266 | orchestrator | Thursday 05 February 2026 00:25:18 +0000 (0:00:00.107) 0:00:00.107 ***** 2026-02-05 00:25:30.604277 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:30.604290 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:30.604301 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:30.604312 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:30.604323 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:30.604333 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:30.604392 | orchestrator | 2026-02-05 00:25:30.604406 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-05 00:25:30.604444 | orchestrator | Thursday 05 February 2026 
00:25:22 +0000 (0:00:04.140) 0:00:04.247 ***** 2026-02-05 00:25:30.604456 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:30.604467 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:30.604477 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:30.604488 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:30.604498 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:30.604509 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:30.604520 | orchestrator | 2026-02-05 00:25:30.604531 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-05 00:25:30.604542 | orchestrator | 2026-02-05 00:25:30.604552 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-05 00:25:30.604563 | orchestrator | Thursday 05 February 2026 00:25:22 +0000 (0:00:00.748) 0:00:04.995 ***** 2026-02-05 00:25:30.604574 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:30.604585 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:30.604596 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:30.604606 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:30.604617 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:30.604627 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:30.604640 | orchestrator | 2026-02-05 00:25:30.604652 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-05 00:25:30.604683 | orchestrator | Thursday 05 February 2026 00:25:23 +0000 (0:00:00.144) 0:00:05.140 ***** 2026-02-05 00:25:30.604696 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:25:30.604708 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:25:30.604720 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:25:30.604732 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:25:30.604743 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:25:30.604757 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:25:30.604769 | 
orchestrator |
2026-02-05 00:25:30.604782 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-05 00:25:30.604794 | orchestrator | Thursday 05 February 2026 00:25:23 +0000 (0:00:00.135) 0:00:05.275 *****
2026-02-05 00:25:30.604807 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:30.604820 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:30.604833 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:30.604846 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:30.604859 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:30.604870 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:30.604883 | orchestrator |
2026-02-05 00:25:30.604896 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-05 00:25:30.604908 | orchestrator | Thursday 05 February 2026 00:25:23 +0000 (0:00:00.708) 0:00:05.984 *****
2026-02-05 00:25:30.604921 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:30.604932 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:30.604945 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:30.604957 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:30.604970 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:30.604982 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:30.604995 | orchestrator |
2026-02-05 00:25:30.605008 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-05 00:25:30.605022 | orchestrator | Thursday 05 February 2026 00:25:24 +0000 (0:00:00.796) 0:00:06.781 *****
2026-02-05 00:25:30.605033 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-05 00:25:30.605044 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-05 00:25:30.605054 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-05 00:25:30.605065 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-05 00:25:30.605075 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-05 00:25:30.605086 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-05 00:25:30.605097 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-05 00:25:30.605107 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-05 00:25:30.605118 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-05 00:25:30.605137 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-05 00:25:30.605148 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-05 00:25:30.605158 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-05 00:25:30.605169 | orchestrator |
2026-02-05 00:25:30.605180 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-05 00:25:30.605191 | orchestrator | Thursday 05 February 2026 00:25:25 +0000 (0:00:01.234) 0:00:08.015 *****
2026-02-05 00:25:30.605201 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:30.605212 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:30.605222 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:30.605233 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:30.605243 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:30.605254 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:30.605265 | orchestrator |
2026-02-05 00:25:30.605275 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-05 00:25:30.605287 | orchestrator | Thursday 05 February 2026 00:25:27 +0000 (0:00:01.180) 0:00:09.196 *****
2026-02-05 00:25:30.605298 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 00:25:30.605309 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 00:25:30.605319 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 00:25:30.605395 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 00:25:30.605407 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 00:25:30.605437 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-05 00:25:30.605449 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-05 00:25:30.605460 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-05 00:25:30.605471 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-05 00:25:30.605482 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-05 00:25:30.605492 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-05 00:25:30.605503 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-05 00:25:30.605514 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-05 00:25:30.605524 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-05 00:25:30.605535 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-05 00:25:30.605546 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-05 00:25:30.605562 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-05 00:25:30.605574 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-05 00:25:30.605585 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-05 00:25:30.605595 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-05 00:25:30.605606 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-05 00:25:30.605616 | orchestrator |
2026-02-05 00:25:30.605627 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-05 00:25:30.605639 | orchestrator | Thursday 05 February 2026 00:25:28 +0000 (0:00:01.311) 0:00:10.508 *****
2026-02-05 00:25:30.605650 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:30.605660 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:30.605671 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:30.605681 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:30.605692 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:30.605703 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:30.605713 | orchestrator |
2026-02-05 00:25:30.605724 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-05 00:25:30.605742 | orchestrator | Thursday 05 February 2026 00:25:28 +0000 (0:00:00.162) 0:00:10.670 *****
2026-02-05 00:25:30.605753 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:30.605764 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:30.605774 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:30.605785 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:30.605795 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:30.605806 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:30.605816 | orchestrator |
2026-02-05 00:25:30.605827 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-05 00:25:30.605838 | orchestrator | Thursday 05 February 2026 00:25:28 +0000 (0:00:00.189) 0:00:10.859 *****
2026-02-05 00:25:30.605849 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:30.605859 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:30.605870 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:30.605880 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:30.605891 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:30.605901 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:30.605912 | orchestrator |
2026-02-05 00:25:30.605923 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-05 00:25:30.605934 | orchestrator | Thursday 05 February 2026 00:25:29 +0000 (0:00:00.579) 0:00:11.438 *****
2026-02-05 00:25:30.605944 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:30.605955 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:30.605965 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:30.605976 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:30.605986 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:30.605997 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:30.606008 | orchestrator |
2026-02-05 00:25:30.606155 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-05 00:25:30.606169 | orchestrator | Thursday 05 February 2026 00:25:29 +0000 (0:00:00.168) 0:00:11.606 *****
2026-02-05 00:25:30.606180 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-05 00:25:30.606213 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-05 00:25:30.606225 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:30.606236 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:30.606246 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-05 00:25:30.606257 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:30.606267 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-05 00:25:30.606278 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:30.606289 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-05 00:25:30.606299 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:30.606310 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 00:25:30.606320 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:30.606331 | orchestrator |
2026-02-05 00:25:30.606341 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-05 00:25:30.606371 | orchestrator | Thursday 05 February 2026 00:25:30 +0000 (0:00:00.730) 0:00:12.337 *****
2026-02-05 00:25:30.606382 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:30.606393 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:30.606403 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:30.606414 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:30.606425 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:30.606435 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:30.606446 | orchestrator |
2026-02-05 00:25:30.606457 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-05 00:25:30.606468 | orchestrator | Thursday 05 February 2026 00:25:30 +0000 (0:00:00.161) 0:00:12.499 *****
2026-02-05 00:25:30.606479 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:30.606489 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:30.606500 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:30.606511 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:30.606540 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:31.931223 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:31.931324 | orchestrator |
2026-02-05 00:25:31.931341 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-05 00:25:31.931461 | orchestrator | Thursday 05 February 2026 00:25:30 +0000 (0:00:00.188) 0:00:12.687 *****
2026-02-05 00:25:31.931474 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:31.931485 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:31.931497 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:31.931508 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:31.931519 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:31.931529 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:31.931540 | orchestrator |
2026-02-05 00:25:31.931551 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-05 00:25:31.931563 | orchestrator | Thursday 05 February 2026 00:25:30 +0000 (0:00:00.163) 0:00:12.850 *****
2026-02-05 00:25:31.931573 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:25:31.931584 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:25:31.931595 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:25:31.931606 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:25:31.931617 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:25:31.931628 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:25:31.931639 | orchestrator |
2026-02-05 00:25:31.931650 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-05 00:25:31.931661 | orchestrator | Thursday 05 February 2026 00:25:31 +0000 (0:00:00.653) 0:00:13.504 *****
2026-02-05 00:25:31.931672 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:25:31.931682 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:25:31.931693 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:25:31.931704 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:25:31.931715 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:25:31.931725 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:25:31.931736 | orchestrator |
2026-02-05 00:25:31.931747 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:25:31.931782 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 00:25:31.931799 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 00:25:31.931812 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 00:25:31.931824 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 00:25:31.931838 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 00:25:31.931850 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-05 00:25:31.931862 | orchestrator |
2026-02-05 00:25:31.931875 | orchestrator |
2026-02-05 00:25:31.931888 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:25:31.931902 | orchestrator | Thursday 05 February 2026 00:25:31 +0000 (0:00:00.248) 0:00:13.752 *****
2026-02-05 00:25:31.931916 | orchestrator | ===============================================================================
2026-02-05 00:25:31.931926 | orchestrator | Gathering Facts --------------------------------------------------------- 4.14s
2026-02-05 00:25:31.931937 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s
2026-02-05 00:25:31.931949 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.23s
2026-02-05 00:25:31.931981 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-02-05 00:25:31.931993 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s
2026-02-05 00:25:31.932003 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s
2026-02-05 00:25:31.932014 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2026-02-05 00:25:31.932025 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.71s
2026-02-05 00:25:31.932036 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2026-02-05 00:25:31.932046 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2026-02-05 00:25:31.932057 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s
2026-02-05 00:25:31.932068 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s
2026-02-05 00:25:31.932079 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s
2026-02-05 00:25:31.932090 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2026-02-05 00:25:31.932101 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s
2026-02-05 00:25:31.932112 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-02-05 00:25:31.932123 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-02-05 00:25:31.932134 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2026-02-05 00:25:31.932145 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2026-02-05 00:25:32.234310 | orchestrator | + osism apply --environment custom facts
2026-02-05 00:25:34.246171 | orchestrator | 2026-02-05 00:25:34 | INFO  | Trying to run play facts in environment custom
2026-02-05 00:25:44.274507 | orchestrator | 2026-02-05 00:25:44 | INFO  | Prepare task for execution of facts.
2026-02-05 00:25:44.338709 | orchestrator | 2026-02-05 00:25:44 | INFO  | Task 7b552ddd-faef-48c3-85c6-03c8f8a808d4 (facts) was prepared for execution.
2026-02-05 00:25:44.338809 | orchestrator | 2026-02-05 00:25:44 | INFO  | It takes a moment until task 7b552ddd-faef-48c3-85c6-03c8f8a808d4 (facts) has been started and output is visible here.
2026-02-05 00:26:28.903936 | orchestrator |
2026-02-05 00:26:28.904061 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-05 00:26:28.904081 | orchestrator |
2026-02-05 00:26:28.904105 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-05 00:26:28.904137 | orchestrator | Thursday 05 February 2026 00:25:48 +0000 (0:00:00.074) 0:00:00.074 *****
2026-02-05 00:26:28.904151 | orchestrator | ok: [testbed-manager]
2026-02-05 00:26:28.904165 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:26:28.904179 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:26:28.904193 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:26:28.904203 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:26:28.904224 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:26:28.904234 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:26:28.904243 | orchestrator |
2026-02-05 00:26:28.904253 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-05 00:26:28.904263 | orchestrator | Thursday 05 February 2026 00:25:49 +0000 (0:00:01.388) 0:00:01.463 *****
2026-02-05 00:26:28.904273 | orchestrator | ok: [testbed-manager]
2026-02-05 00:26:28.904282 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:26:28.904293 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:26:28.904302 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:26:28.904313 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:26:28.904323 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:26:28.904333 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:26:28.904344 | orchestrator |
2026-02-05 00:26:28.904378 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-05 00:26:28.904389 | orchestrator |
2026-02-05 00:26:28.904399 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-05 00:26:28.904410 | orchestrator | Thursday 05 February 2026 00:25:51 +0000 (0:00:01.272) 0:00:02.736 *****
2026-02-05 00:26:28.904456 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.904467 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.904478 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.904488 | orchestrator |
2026-02-05 00:26:28.904499 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-05 00:26:28.904511 | orchestrator | Thursday 05 February 2026 00:25:51 +0000 (0:00:00.099) 0:00:02.835 *****
2026-02-05 00:26:28.904521 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.904531 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.904540 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.904550 | orchestrator |
2026-02-05 00:26:28.904560 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-05 00:26:28.904569 | orchestrator | Thursday 05 February 2026 00:25:51 +0000 (0:00:00.245) 0:00:03.036 *****
2026-02-05 00:26:28.904578 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.904587 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.904595 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.904604 | orchestrator |
2026-02-05 00:26:28.904614 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-05 00:26:28.904623 | orchestrator | Thursday 05 February 2026 00:25:51 +0000 (0:00:00.133) 0:00:03.281 *****
2026-02-05 00:26:28.904634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:26:28.904645 | orchestrator |
2026-02-05 00:26:28.904654 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-05 00:26:28.904663 | orchestrator | Thursday 05 February 2026 00:25:51 +0000 (0:00:00.405) 0:00:03.415 *****
2026-02-05 00:26:28.904672 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.904680 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.904689 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.904698 | orchestrator |
2026-02-05 00:26:28.904707 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-05 00:26:28.904716 | orchestrator | Thursday 05 February 2026 00:25:52 +0000 (0:00:00.121) 0:00:03.820 *****
2026-02-05 00:26:28.904725 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:26:28.904735 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:26:28.904745 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:26:28.904754 | orchestrator |
2026-02-05 00:26:28.904763 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-05 00:26:28.904773 | orchestrator | Thursday 05 February 2026 00:25:52 +0000 (0:00:00.121) 0:00:03.942 *****
2026-02-05 00:26:28.904782 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:26:28.904792 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:26:28.904811 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:26:28.904820 | orchestrator |
2026-02-05 00:26:28.904839 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-05 00:26:28.904848 | orchestrator | Thursday 05 February 2026 00:25:53 +0000 (0:00:01.018) 0:00:04.961 *****
2026-02-05 00:26:28.904856 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.904865 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.904873 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.904882 | orchestrator |
2026-02-05 00:26:28.904892 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-05 00:26:28.904910 | orchestrator | Thursday 05 February 2026 00:25:53 +0000 (0:00:00.446) 0:00:05.407 *****
2026-02-05 00:26:28.904919 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:26:28.904929 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:26:28.904938 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:26:28.904947 | orchestrator |
2026-02-05 00:26:28.904966 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-05 00:26:28.904975 | orchestrator | Thursday 05 February 2026 00:25:54 +0000 (0:00:01.065) 0:00:06.473 *****
2026-02-05 00:26:28.904985 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:26:28.904994 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:26:28.905002 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:26:28.905010 | orchestrator |
2026-02-05 00:26:28.905020 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-05 00:26:28.905029 | orchestrator | Thursday 05 February 2026 00:26:11 +0000 (0:00:16.485) 0:00:22.958 *****
2026-02-05 00:26:28.905038 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:26:28.905073 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:26:28.905083 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:26:28.905093 | orchestrator |
2026-02-05 00:26:28.905112 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-05 00:26:28.905145 | orchestrator | Thursday 05 February 2026 00:26:11 +0000 (0:00:00.109) 0:00:23.067 *****
2026-02-05 00:26:28.905154 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:26:28.905164 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:26:28.905173 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:26:28.905182 | orchestrator |
2026-02-05 00:26:28.905192 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-05 00:26:28.905202 | orchestrator | Thursday 05 February 2026 00:26:19 +0000 (0:00:08.078) 0:00:31.146 *****
2026-02-05 00:26:28.905212 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.905222 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.905232 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.905241 | orchestrator |
2026-02-05 00:26:28.905251 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-05 00:26:28.905260 | orchestrator | Thursday 05 February 2026 00:26:20 +0000 (0:00:00.439) 0:00:31.586 *****
2026-02-05 00:26:28.905269 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-05 00:26:28.905278 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-05 00:26:28.905286 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-05 00:26:28.905295 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-05 00:26:28.905305 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-05 00:26:28.905315 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-05 00:26:28.905325 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-05 00:26:28.905335 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-05 00:26:28.905345 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-05 00:26:28.905356 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-05 00:26:28.905367 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-05 00:26:28.905377 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-05 00:26:28.905387 | orchestrator |
2026-02-05 00:26:28.905397 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-05 00:26:28.905406 | orchestrator | Thursday 05 February 2026 00:26:23 +0000 (0:00:03.567) 0:00:35.153 *****
2026-02-05 00:26:28.905438 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.905449 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.905474 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.905484 | orchestrator |
2026-02-05 00:26:28.905494 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 00:26:28.905504 | orchestrator |
2026-02-05 00:26:28.905514 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 00:26:28.905584 | orchestrator | Thursday 05 February 2026 00:26:25 +0000 (0:00:01.410) 0:00:36.564 *****
2026-02-05 00:26:28.905596 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:26:28.905618 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:26:28.905637 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:26:28.905647 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:28.905658 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:28.905668 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:28.905678 | orchestrator | ok: [testbed-manager]
2026-02-05 00:26:28.905688 | orchestrator |
2026-02-05 00:26:28.905696 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:26:28.905706 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:26:28.905716 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:26:28.905726 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:26:28.905735 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:26:28.905745 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:26:28.905763 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:26:28.905772 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:26:28.905781 | orchestrator |
2026-02-05 00:26:28.905789 | orchestrator |
2026-02-05 00:26:28.905798 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:26:28.905807 | orchestrator | Thursday 05 February 2026 00:26:28 +0000 (0:00:03.807) 0:00:40.372 *****
2026-02-05 00:26:28.905815 | orchestrator | ===============================================================================
2026-02-05 00:26:28.905824 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.49s
2026-02-05 00:26:28.905833 | orchestrator | Install required packages (Debian) -------------------------------------- 8.08s
2026-02-05 00:26:28.905842 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.81s
2026-02-05 00:26:28.905851 | orchestrator | Copy fact files --------------------------------------------------------- 3.57s
2026-02-05 00:26:28.905860 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.41s
2026-02-05 00:26:28.905869 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s
2026-02-05 00:26:28.905892 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s
2026-02-05 00:26:29.097685 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s
2026-02-05 00:26:29.097837 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.02s
2026-02-05 00:26:29.097855 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2026-02-05 00:26:29.097865 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-02-05 00:26:29.097875 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2026-02-05 00:26:29.097885 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.25s
2026-02-05 00:26:29.097895 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-02-05 00:26:29.097905 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2026-02-05 00:26:29.097916 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-02-05 00:26:29.097925 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-02-05 00:26:29.097935 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-02-05 00:26:29.415963 | orchestrator | + osism apply bootstrap
2026-02-05 00:26:41.552294 | orchestrator | 2026-02-05 00:26:41 | INFO  | Prepare task for execution of bootstrap.
2026-02-05 00:26:41.622139 | orchestrator | 2026-02-05 00:26:41 | INFO  | Task 6d16649c-505d-4ea2-ad85-6d4aebbb8bc8 (bootstrap) was prepared for execution.
2026-02-05 00:26:41.622277 | orchestrator | 2026-02-05 00:26:41 | INFO  | It takes a moment until task 6d16649c-505d-4ea2-ad85-6d4aebbb8bc8 (bootstrap) has been started and output is visible here.
2026-02-05 00:26:59.087177 | orchestrator |
2026-02-05 00:26:59.087311 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-05 00:26:59.087329 | orchestrator |
2026-02-05 00:26:59.087342 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-05 00:26:59.087354 | orchestrator | Thursday 05 February 2026 00:26:46 +0000 (0:00:00.156) 0:00:00.156 *****
2026-02-05 00:26:59.087365 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:59.087381 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:59.087399 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:59.087418 | orchestrator | ok: [testbed-manager]
2026-02-05 00:26:59.087435 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:26:59.087532 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:26:59.087546 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:26:59.087557 | orchestrator |
2026-02-05 00:26:59.087568 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 00:26:59.087579 | orchestrator |
2026-02-05 00:26:59.087590 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 00:26:59.087601 | orchestrator | Thursday 05 February 2026 00:26:46 +0000 (0:00:00.277) 0:00:00.433 *****
2026-02-05 00:26:59.087612 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:26:59.087624 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:26:59.087635 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:26:59.087646 | orchestrator | ok: [testbed-manager]
2026-02-05 00:26:59.087656 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:26:59.087669 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:26:59.087683 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:26:59.087696 | orchestrator |
2026-02-05 00:26:59.087710 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-05 00:26:59.087724 | orchestrator |
2026-02-05 00:26:59.087736 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 00:26:59.087749 | orchestrator | Thursday 05 February 2026 00:26:50 +0000 (0:00:03.664) 0:00:04.098 *****
2026-02-05 00:26:59.087763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:26:59.087776 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:26:59.087789 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-05 00:26:59.087801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:26:59.087814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-05 00:26:59.087827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-05 00:26:59.087840 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-05 00:26:59.087853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-05 00:26:59.087866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-05 00:26:59.087878 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-05 00:26:59.087891 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-05 00:26:59.087905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-05 00:26:59.087917 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-05 00:26:59.087930 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-05 00:26:59.087942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-05 00:26:59.087955 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-05 00:26:59.087994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-05 00:26:59.088007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-05 00:26:59.088019 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:26:59.088032 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-05 00:26:59.088045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-05 00:26:59.088058 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-05 00:26:59.088070 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-05 00:26:59.088081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-05 00:26:59.088092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-05 00:26:59.088102 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-05 00:26:59.088113 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-05 00:26:59.088124 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-05 00:26:59.088135 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:26:59.088145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-05 00:26:59.088156 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-05 00:26:59.088166 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:26:59.088178 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-05 00:26:59.088188 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-05 00:26:59.088199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:26:59.088210 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-05 00:26:59.088220 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:26:59.088231 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-05 00:26:59.088241 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-05 00:26:59.088252 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:26:59.088263 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-05 00:26:59.088273 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:26:59.088284 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-05 00:26:59.088295 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-05 00:26:59.088305 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-05 00:26:59.088316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-05 00:26:59.088345 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-05 00:26:59.088357 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:26:59.088368 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-05 00:26:59.088378 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-05 00:26:59.088389 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:26:59.088399 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-05 00:26:59.088410 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-05 00:26:59.088420 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-05 00:26:59.088431 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-05 00:26:59.088442 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:26:59.088484 | orchestrator |
2026-02-05 00:26:59.088495 | orchestrator |
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-05 00:26:59.088506 | orchestrator | 2026-02-05 00:26:59.088516 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-05 00:26:59.088527 | orchestrator | Thursday 05 February 2026 00:26:51 +0000 (0:00:00.511) 0:00:04.609 ***** 2026-02-05 00:26:59.088538 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:26:59.088549 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:26:59.088568 | orchestrator | ok: [testbed-manager] 2026-02-05 00:26:59.088579 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:26:59.088590 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:26:59.088601 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:26:59.088611 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:26:59.088622 | orchestrator | 2026-02-05 00:26:59.088658 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-05 00:26:59.088670 | orchestrator | Thursday 05 February 2026 00:26:53 +0000 (0:00:02.127) 0:00:06.736 ***** 2026-02-05 00:26:59.088680 | orchestrator | ok: [testbed-manager] 2026-02-05 00:26:59.088691 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:26:59.088702 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:26:59.088712 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:26:59.088723 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:26:59.088734 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:26:59.088766 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:26:59.088777 | orchestrator | 2026-02-05 00:26:59.088788 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-05 00:26:59.088799 | orchestrator | Thursday 05 February 2026 00:26:54 +0000 (0:00:01.335) 0:00:08.071 ***** 2026-02-05 00:26:59.088811 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:26:59.088825 | orchestrator | 2026-02-05 00:26:59.088836 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-05 00:26:59.088847 | orchestrator | Thursday 05 February 2026 00:26:54 +0000 (0:00:00.243) 0:00:08.315 ***** 2026-02-05 00:26:59.088858 | orchestrator | changed: [testbed-manager] 2026-02-05 00:26:59.088869 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:26:59.088879 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:26:59.088890 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:26:59.088901 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:26:59.088911 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:26:59.088922 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:26:59.088933 | orchestrator | 2026-02-05 00:26:59.088944 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-05 00:26:59.088955 | orchestrator | Thursday 05 February 2026 00:26:56 +0000 (0:00:01.968) 0:00:10.283 ***** 2026-02-05 00:26:59.088965 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:26:59.088977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:26:59.088990 | orchestrator | 2026-02-05 00:26:59.089001 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-05 00:26:59.089030 | orchestrator | Thursday 05 February 2026 00:26:56 +0000 (0:00:00.251) 0:00:10.534 ***** 2026-02-05 00:26:59.089056 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:26:59.089067 | 
orchestrator | changed: [testbed-node-0] 2026-02-05 00:26:59.089078 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:26:59.089089 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:26:59.089104 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:26:59.089115 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:26:59.089125 | orchestrator | 2026-02-05 00:26:59.089136 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-02-05 00:26:59.089147 | orchestrator | Thursday 05 February 2026 00:26:58 +0000 (0:00:01.005) 0:00:11.540 ***** 2026-02-05 00:26:59.089158 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:26:59.089169 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:26:59.089179 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:26:59.089190 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:26:59.089201 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:26:59.089211 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:26:59.089229 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:26:59.089240 | orchestrator | 2026-02-05 00:26:59.089251 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-05 00:26:59.089262 | orchestrator | Thursday 05 February 2026 00:26:58 +0000 (0:00:00.521) 0:00:12.062 ***** 2026-02-05 00:26:59.089272 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:26:59.089283 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:26:59.089293 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:26:59.089304 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:26:59.089315 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:26:59.089325 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:26:59.089336 | orchestrator | ok: [testbed-manager] 2026-02-05 00:26:59.089347 | orchestrator | 2026-02-05 00:26:59.089358 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-02-05 00:26:59.089369 | orchestrator | Thursday 05 February 2026 00:26:58 +0000 (0:00:00.454) 0:00:12.516 ***** 2026-02-05 00:26:59.089380 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:26:59.089391 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:26:59.089410 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:27:10.970375 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:27:10.970550 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:27:10.970561 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:27:10.970567 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:27:10.970574 | orchestrator | 2026-02-05 00:27:10.970582 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-05 00:27:10.970590 | orchestrator | Thursday 05 February 2026 00:26:59 +0000 (0:00:00.183) 0:00:12.700 ***** 2026-02-05 00:27:10.970599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:27:10.970621 | orchestrator | 2026-02-05 00:27:10.970628 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-05 00:27:10.970636 | orchestrator | Thursday 05 February 2026 00:26:59 +0000 (0:00:00.285) 0:00:12.985 ***** 2026-02-05 00:27:10.970643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:27:10.970649 | orchestrator | 2026-02-05 00:27:10.970655 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-05 
00:27:10.970661 | orchestrator | Thursday 05 February 2026 00:26:59 +0000 (0:00:00.386) 0:00:13.372 ***** 2026-02-05 00:27:10.970668 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.970675 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.970681 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.970687 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.970693 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.970699 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.970704 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.970710 | orchestrator | 2026-02-05 00:27:10.970716 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-05 00:27:10.970722 | orchestrator | Thursday 05 February 2026 00:27:01 +0000 (0:00:01.356) 0:00:14.728 ***** 2026-02-05 00:27:10.970728 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:27:10.970734 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:27:10.970740 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:27:10.970746 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:27:10.970752 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:27:10.970757 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:27:10.970763 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:27:10.970769 | orchestrator | 2026-02-05 00:27:10.970775 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-05 00:27:10.970812 | orchestrator | Thursday 05 February 2026 00:27:01 +0000 (0:00:00.232) 0:00:14.961 ***** 2026-02-05 00:27:10.970818 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.970824 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.970830 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.970836 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.970841 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.970847 | orchestrator 
| ok: [testbed-node-2] 2026-02-05 00:27:10.970852 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.970858 | orchestrator | 2026-02-05 00:27:10.970864 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-05 00:27:10.970870 | orchestrator | Thursday 05 February 2026 00:27:01 +0000 (0:00:00.523) 0:00:15.484 ***** 2026-02-05 00:27:10.970875 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:27:10.970881 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:27:10.970887 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:27:10.970893 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:27:10.970898 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:27:10.970904 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:27:10.970910 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:27:10.970915 | orchestrator | 2026-02-05 00:27:10.970922 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-05 00:27:10.970928 | orchestrator | Thursday 05 February 2026 00:27:02 +0000 (0:00:00.249) 0:00:15.734 ***** 2026-02-05 00:27:10.970935 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:27:10.970940 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.970955 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:27:10.970961 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:27:10.970966 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:27:10.970972 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:27:10.970978 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:27:10.970983 | orchestrator | 2026-02-05 00:27:10.970989 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-05 00:27:10.970995 | orchestrator | Thursday 05 February 2026 00:27:02 +0000 (0:00:00.608) 0:00:16.343 ***** 2026-02-05 00:27:10.971001 | orchestrator | ok: 
[testbed-manager] 2026-02-05 00:27:10.971006 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:27:10.971012 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:27:10.971018 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:27:10.971024 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:27:10.971029 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:27:10.971035 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:27:10.971041 | orchestrator | 2026-02-05 00:27:10.971046 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-05 00:27:10.971052 | orchestrator | Thursday 05 February 2026 00:27:03 +0000 (0:00:01.115) 0:00:17.458 ***** 2026-02-05 00:27:10.971058 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971064 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.971070 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971076 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.971081 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.971087 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971093 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971098 | orchestrator | 2026-02-05 00:27:10.971104 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-05 00:27:10.971110 | orchestrator | Thursday 05 February 2026 00:27:05 +0000 (0:00:01.090) 0:00:18.548 ***** 2026-02-05 00:27:10.971132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:27:10.971139 | orchestrator | 2026-02-05 00:27:10.971145 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-05 00:27:10.971151 | orchestrator | Thursday 05 February 2026 
00:27:05 +0000 (0:00:00.344) 0:00:18.893 ***** 2026-02-05 00:27:10.971163 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:27:10.971169 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:27:10.971175 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:27:10.971180 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:27:10.971186 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:27:10.971192 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:27:10.971197 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:27:10.971203 | orchestrator | 2026-02-05 00:27:10.971209 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-05 00:27:10.971215 | orchestrator | Thursday 05 February 2026 00:27:06 +0000 (0:00:01.273) 0:00:20.167 ***** 2026-02-05 00:27:10.971220 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971226 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971232 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971237 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971243 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.971249 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.971254 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.971260 | orchestrator | 2026-02-05 00:27:10.971266 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-05 00:27:10.971271 | orchestrator | Thursday 05 February 2026 00:27:06 +0000 (0:00:00.231) 0:00:20.398 ***** 2026-02-05 00:27:10.971277 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971283 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971289 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971294 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971300 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.971306 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.971311 | 
orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.971317 | orchestrator | 2026-02-05 00:27:10.971323 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-05 00:27:10.971329 | orchestrator | Thursday 05 February 2026 00:27:07 +0000 (0:00:00.232) 0:00:20.631 ***** 2026-02-05 00:27:10.971334 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971340 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971346 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971351 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971357 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.971362 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.971368 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.971374 | orchestrator | 2026-02-05 00:27:10.971379 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-05 00:27:10.971385 | orchestrator | Thursday 05 February 2026 00:27:07 +0000 (0:00:00.204) 0:00:20.836 ***** 2026-02-05 00:27:10.971392 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:27:10.971401 | orchestrator | 2026-02-05 00:27:10.971406 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-05 00:27:10.971412 | orchestrator | Thursday 05 February 2026 00:27:07 +0000 (0:00:00.251) 0:00:21.088 ***** 2026-02-05 00:27:10.971418 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971423 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971429 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971435 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971440 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.971446 | orchestrator | ok: 
[testbed-node-1] 2026-02-05 00:27:10.971452 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.971457 | orchestrator | 2026-02-05 00:27:10.971482 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-05 00:27:10.971492 | orchestrator | Thursday 05 February 2026 00:27:08 +0000 (0:00:00.529) 0:00:21.617 ***** 2026-02-05 00:27:10.971502 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:27:10.971511 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:27:10.971529 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:27:10.971540 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:27:10.971547 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:27:10.971552 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:27:10.971558 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:27:10.971564 | orchestrator | 2026-02-05 00:27:10.971570 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-05 00:27:10.971576 | orchestrator | Thursday 05 February 2026 00:27:08 +0000 (0:00:00.231) 0:00:21.849 ***** 2026-02-05 00:27:10.971582 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971588 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971593 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971599 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971605 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:27:10.971610 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:27:10.971616 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:27:10.971622 | orchestrator | 2026-02-05 00:27:10.971628 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-05 00:27:10.971633 | orchestrator | Thursday 05 February 2026 00:27:09 +0000 (0:00:01.036) 0:00:22.886 ***** 2026-02-05 00:27:10.971639 | orchestrator | ok: [testbed-node-3] 2026-02-05 
00:27:10.971645 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971650 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971656 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971662 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:10.971667 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:10.971673 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:10.971679 | orchestrator | 2026-02-05 00:27:10.971685 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-05 00:27:10.971690 | orchestrator | Thursday 05 February 2026 00:27:09 +0000 (0:00:00.538) 0:00:23.424 ***** 2026-02-05 00:27:10.971696 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:10.971702 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:10.971707 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:10.971713 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:10.971723 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:27:51.318445 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:27:51.318650 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:27:51.318670 | orchestrator | 2026-02-05 00:27:51.318683 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-05 00:27:51.318697 | orchestrator | Thursday 05 February 2026 00:27:10 +0000 (0:00:01.098) 0:00:24.523 ***** 2026-02-05 00:27:51.318708 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:51.318719 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:51.318730 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:51.318745 | orchestrator | changed: [testbed-manager] 2026-02-05 00:27:51.318756 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:27:51.318767 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:27:51.318778 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:27:51.318789 | orchestrator | 2026-02-05 00:27:51.318800 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-05 00:27:51.318811 | orchestrator | Thursday 05 February 2026 00:27:26 +0000 (0:00:15.969) 0:00:40.492 ***** 2026-02-05 00:27:51.318823 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:51.318834 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:51.318845 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:51.318856 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:51.318867 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:51.318878 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:51.318889 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:51.318900 | orchestrator | 2026-02-05 00:27:51.318911 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-05 00:27:51.318922 | orchestrator | Thursday 05 February 2026 00:27:27 +0000 (0:00:00.209) 0:00:40.702 ***** 2026-02-05 00:27:51.318933 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:51.318969 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:51.318983 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:51.318996 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:51.319008 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:51.319021 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:51.319034 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:51.319048 | orchestrator | 2026-02-05 00:27:51.319067 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-05 00:27:51.319085 | orchestrator | Thursday 05 February 2026 00:27:27 +0000 (0:00:00.219) 0:00:40.921 ***** 2026-02-05 00:27:51.319102 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:51.319120 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:51.319137 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:51.319155 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:51.319171 | orchestrator | ok: 
[testbed-node-0] 2026-02-05 00:27:51.319189 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:51.319207 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:51.319226 | orchestrator | 2026-02-05 00:27:51.319244 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-02-05 00:27:51.319263 | orchestrator | Thursday 05 February 2026 00:27:27 +0000 (0:00:00.206) 0:00:41.128 ***** 2026-02-05 00:27:51.319284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:27:51.319306 | orchestrator | 2026-02-05 00:27:51.319346 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-05 00:27:51.319367 | orchestrator | Thursday 05 February 2026 00:27:27 +0000 (0:00:00.274) 0:00:41.402 ***** 2026-02-05 00:27:51.319385 | orchestrator | ok: [testbed-manager] 2026-02-05 00:27:51.319405 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:27:51.319423 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:27:51.319442 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:27:51.319462 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:27:51.319479 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:27:51.319498 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:27:51.319543 | orchestrator | 2026-02-05 00:27:51.319555 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-05 00:27:51.319566 | orchestrator | Thursday 05 February 2026 00:27:29 +0000 (0:00:01.794) 0:00:43.197 ***** 2026-02-05 00:27:51.319577 | orchestrator | changed: [testbed-manager] 2026-02-05 00:27:51.319588 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:27:51.319599 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:27:51.319610 | orchestrator | 
changed: [testbed-node-5]
2026-02-05 00:27:51.319621 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:27:51.319631 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:27:51.319648 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:27:51.319660 | orchestrator |
2026-02-05 00:27:51.319671 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-05 00:27:51.319682 | orchestrator | Thursday 05 February 2026 00:27:30 +0000 (0:00:01.094) 0:00:44.291 *****
2026-02-05 00:27:51.319693 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:27:51.319703 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:27:51.319714 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:27:51.319725 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:27:51.319735 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:27:51.319746 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:27:51.319756 | orchestrator | ok: [testbed-manager]
2026-02-05 00:27:51.319767 | orchestrator |
2026-02-05 00:27:51.319778 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-05 00:27:51.319788 | orchestrator | Thursday 05 February 2026 00:27:32 +0000 (0:00:01.684) 0:00:45.976 *****
2026-02-05 00:27:51.319800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:27:51.319824 | orchestrator |
2026-02-05 00:27:51.319835 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-05 00:27:51.319847 | orchestrator | Thursday 05 February 2026 00:27:32 +0000 (0:00:00.301) 0:00:46.277 *****
2026-02-05 00:27:51.319857 | orchestrator | changed: [testbed-manager]
2026-02-05 00:27:51.319868 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:27:51.319879 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:27:51.319890 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:27:51.319901 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:27:51.319911 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:27:51.319922 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:27:51.319933 | orchestrator |
2026-02-05 00:27:51.319966 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-05 00:27:51.319978 | orchestrator | Thursday 05 February 2026 00:27:33 +0000 (0:00:01.009) 0:00:47.287 *****
2026-02-05 00:27:51.319989 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:27:51.319999 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:27:51.320010 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:27:51.320020 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:27:51.320031 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:27:51.320041 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:27:51.320052 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:27:51.320063 | orchestrator |
2026-02-05 00:27:51.320074 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-05 00:27:51.320085 | orchestrator | Thursday 05 February 2026 00:27:33 +0000 (0:00:00.214) 0:00:47.501 *****
2026-02-05 00:27:51.320096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:27:51.320107 | orchestrator |
2026-02-05 00:27:51.320118 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-05 00:27:51.320129 | orchestrator | Thursday 05 February 2026 00:27:34 +0000 (0:00:00.273) 0:00:47.775 *****
2026-02-05 00:27:51.320139 | orchestrator | ok: [testbed-manager]
2026-02-05 00:27:51.320150 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:27:51.320161 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:27:51.320172 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:27:51.320183 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:27:51.320193 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:27:51.320207 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:27:51.320225 | orchestrator |
2026-02-05 00:27:51.320243 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-05 00:27:51.320268 | orchestrator | Thursday 05 February 2026 00:27:36 +0000 (0:00:01.850) 0:00:49.626 *****
2026-02-05 00:27:51.320290 | orchestrator | changed: [testbed-manager]
2026-02-05 00:27:51.320308 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:27:51.320325 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:27:51.320342 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:27:51.320358 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:27:51.320375 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:27:51.320392 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:27:51.320410 | orchestrator |
2026-02-05 00:27:51.320427 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-05 00:27:51.320445 | orchestrator | Thursday 05 February 2026 00:27:37 +0000 (0:00:01.131) 0:00:50.757 *****
2026-02-05 00:27:51.320465 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:27:51.320482 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:27:51.320500 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:27:51.320548 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:27:51.320567 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:27:51.320585 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:27:51.320618 | orchestrator | changed: [testbed-manager]
2026-02-05 00:27:51.320638 | orchestrator |
2026-02-05 00:27:51.320655 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-05 00:27:51.320674 | orchestrator | Thursday 05 February 2026 00:27:48 +0000 (0:00:11.121) 0:01:01.879 *****
2026-02-05 00:27:51.320692 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:27:51.320710 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:27:51.320729 | orchestrator | ok: [testbed-manager]
2026-02-05 00:27:51.320747 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:27:51.320765 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:27:51.320783 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:27:51.320800 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:27:51.320811 | orchestrator |
2026-02-05 00:27:51.320822 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-05 00:27:51.320833 | orchestrator | Thursday 05 February 2026 00:27:49 +0000 (0:00:01.305) 0:01:03.184 *****
2026-02-05 00:27:51.320844 | orchestrator | ok: [testbed-manager]
2026-02-05 00:27:51.320854 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:27:51.320865 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:27:51.320875 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:27:51.320886 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:27:51.320897 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:27:51.320907 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:27:51.320918 | orchestrator |
2026-02-05 00:27:51.320937 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-05 00:27:51.320948 | orchestrator | Thursday 05 February 2026 00:27:50 +0000 (0:00:00.907) 0:01:04.092 *****
2026-02-05 00:27:51.320959 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:27:51.320969 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:27:51.320980 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:27:51.320991 | orchestrator | ok: [testbed-manager]
2026-02-05 00:27:51.321001 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:27:51.321012 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:27:51.321022 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:27:51.321033 | orchestrator |
2026-02-05 00:27:51.321044 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-05 00:27:51.321055 | orchestrator | Thursday 05 February 2026 00:27:50 +0000 (0:00:00.220) 0:01:04.312 *****
2026-02-05 00:27:51.321066 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:27:51.321076 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:27:51.321087 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:27:51.321098 | orchestrator | ok: [testbed-manager]
2026-02-05 00:27:51.321116 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:27:51.321143 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:27:51.321161 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:27:51.321179 | orchestrator |
2026-02-05 00:27:51.321206 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-05 00:27:51.321225 | orchestrator | Thursday 05 February 2026 00:27:51 +0000 (0:00:00.241) 0:01:04.553 *****
2026-02-05 00:27:51.321243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:27:51.321263 | orchestrator |
2026-02-05 00:27:51.321299 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-05 00:30:11.161698 | orchestrator | Thursday 05 February 2026 00:27:51 +0000 (0:00:00.294) 0:01:04.848 *****
2026-02-05 00:30:11.161838 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:11.161853 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.161862 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.161870 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.161877 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.161884 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.161892 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.161899 | orchestrator |
2026-02-05 00:30:11.161908 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-05 00:30:11.161938 | orchestrator | Thursday 05 February 2026 00:27:53 +0000 (0:00:01.740) 0:01:06.589 *****
2026-02-05 00:30:11.161945 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:11.161952 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:11.161958 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:11.161964 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:11.161971 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:11.161977 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:11.161983 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:11.161990 | orchestrator |
2026-02-05 00:30:11.161997 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-05 00:30:11.162004 | orchestrator | Thursday 05 February 2026 00:27:53 +0000 (0:00:00.613) 0:01:07.202 *****
2026-02-05 00:30:11.162009 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.162063 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.162071 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.162078 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:11.162084 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.162091 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.162098 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.162105 | orchestrator |
2026-02-05 00:30:11.162112 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-05 00:30:11.162120 | orchestrator | Thursday 05 February 2026 00:27:53 +0000 (0:00:00.219) 0:01:07.422 *****
2026-02-05 00:30:11.162127 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.162134 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.162141 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:11.162148 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.162156 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.162163 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.162169 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.162176 | orchestrator |
2026-02-05 00:30:11.162184 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-05 00:30:11.162191 | orchestrator | Thursday 05 February 2026 00:27:55 +0000 (0:00:01.173) 0:01:08.595 *****
2026-02-05 00:30:11.162198 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:11.162204 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:11.162211 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:11.162219 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:11.162227 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:11.162235 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:11.162243 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:11.162251 | orchestrator |
2026-02-05 00:30:11.162259 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-05 00:30:11.162267 | orchestrator | Thursday 05 February 2026 00:27:56 +0000 (0:00:01.770) 0:01:10.366 *****
2026-02-05 00:30:11.162276 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:11.162284 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.162293 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.162302 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.162310 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.162316 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.162324 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.162332 | orchestrator |
2026-02-05 00:30:11.162339 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-05 00:30:11.162347 | orchestrator | Thursday 05 February 2026 00:27:59 +0000 (0:00:02.624) 0:01:12.991 *****
2026-02-05 00:30:11.162354 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:11.162361 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.162367 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.162375 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.162382 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.162390 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.162397 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.162404 | orchestrator |
2026-02-05 00:30:11.162412 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-05 00:30:11.162441 | orchestrator | Thursday 05 February 2026 00:28:42 +0000 (0:00:43.533) 0:01:56.524 *****
2026-02-05 00:30:11.162449 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:11.162456 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:30:11.162463 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:30:11.162470 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:30:11.162477 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:30:11.162484 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:30:11.162490 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:30:11.162497 | orchestrator |
2026-02-05 00:30:11.162504 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-05 00:30:11.162511 | orchestrator | Thursday 05 February 2026 00:30:03 +0000 (0:01:20.316) 0:03:16.841 *****
2026-02-05 00:30:11.162518 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:11.162524 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.162532 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.162538 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.162545 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.162552 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.162559 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.162565 | orchestrator |
2026-02-05 00:30:11.162572 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-05 00:30:11.162579 | orchestrator | Thursday 05 February 2026 00:30:05 +0000 (0:00:01.899) 0:03:18.741 *****
2026-02-05 00:30:11.162586 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:11.162593 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:11.162600 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:11.162606 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:11.162613 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:11.162619 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:11.162678 | orchestrator | changed: [testbed-manager]
2026-02-05 00:30:11.162686 | orchestrator |
2026-02-05 00:30:11.162694 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-05 00:30:11.162701 | orchestrator | Thursday 05 February 2026 00:30:10 +0000 (0:00:04.806) 0:03:23.547 *****
2026-02-05 00:30:11.162737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-05 00:30:11.162772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-05 00:30:11.162783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-05 00:30:11.162791 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-05 00:30:11.162808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-05 00:30:11.162816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-05 00:30:11.162828 | orchestrator |
2026-02-05 00:30:11.162836 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-05 00:30:11.162844 | orchestrator | Thursday 05 February 2026 00:30:10 +0000 (0:00:00.359) 0:03:23.907 *****
2026-02-05 00:30:11.162851 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162858 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:11.162864 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162870 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162877 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:11.162884 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:11.162896 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162903 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:11.162911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162917 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162924 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-05 00:30:11.162932 | orchestrator |
2026-02-05 00:30:11.162939 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-05 00:30:11.162947 | orchestrator | Thursday 05 February 2026 00:30:11 +0000 (0:00:00.721) 0:03:24.629 *****
2026-02-05 00:30:11.162954 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:11.162963 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:11.162970 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:11.162978 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:11.162984 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:11.162999 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.963863 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:19.963977 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.963995 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:19.964007 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964018 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:19.964029 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964040 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:19.964051 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964088 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:19.964101 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.964112 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.964124 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964135 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964145 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964156 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:19.964167 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:19.964178 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:19.964189 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:19.964200 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:19.964211 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.964222 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.964233 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964243 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964255 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:19.964267 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964278 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:19.964289 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:19.964300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:19.964310 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:19.964321 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:19.964346 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:19.964357 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:19.964368 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.964379 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.964390 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964401 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964411 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964422 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:19.964433 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:19.964444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:19.964454 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-05 00:30:19.964474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:19.964485 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:19.964513 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-05 00:30:19.964524 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:19.964535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:19.964546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-05 00:30:19.964557 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:19.964568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:19.964578 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-05 00:30:19.964589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:19.964600 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:19.964610 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-05 00:30:19.964621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.964684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.964696 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-05 00:30:19.964707 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.964718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.964729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964739 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964750 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964761 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964793 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-05 00:30:19.964804 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-05 00:30:19.964814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-05 00:30:19.964825 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-05 00:30:19.964836 | orchestrator |
2026-02-05 00:30:19.964848 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-05 00:30:19.964859 | orchestrator | Thursday 05 February 2026 00:30:16 +0000 (0:00:04.977) 0:03:29.606 *****
2026-02-05 00:30:19.964870 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964881 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964891 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964927 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964938 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964948 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-05 00:30:19.964959 | orchestrator |
2026-02-05 00:30:19.964970 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-05 00:30:19.964981 | orchestrator | Thursday 05 February 2026 00:30:17 +0000 (0:00:01.466) 0:03:31.073 *****
2026-02-05 00:30:19.964992 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:19.965003 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:19.965014 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:19.965025 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:19.965035 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:19.965046 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:19.965057 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:19.965068 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:19.965079 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:19.965090 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:19.965115 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034691 | orchestrator |
2026-02-05 00:30:32.034814 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-05 00:30:32.034834 | orchestrator | Thursday 05 February 2026 00:30:19 +0000 (0:00:02.444) 0:03:33.517 *****
2026-02-05 00:30:32.034846 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034859 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:32.034872 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034884 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034895 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:32.034906 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:32.034918 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034929 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:32.034940 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034951 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034962 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-05 00:30:32.034973 | orchestrator |
2026-02-05 00:30:32.034984 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-05 00:30:32.035002 | orchestrator | Thursday 05 February 2026 00:30:20 +0000 (0:00:00.589) 0:03:34.106 *****
2026-02-05 00:30:32.035027 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035056 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:32.035076 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035095 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:32.035114 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035166 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:32.035187 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035204 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:32.035225 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035245 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035265 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-05 00:30:32.035286 | orchestrator |
2026-02-05 00:30:32.035307 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-05 00:30:32.035330 | orchestrator | Thursday 05 February 2026 00:30:21 +0000 (0:00:00.546) 0:03:34.653 *****
2026-02-05 00:30:32.035350 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:32.035369 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:32.035382 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:32.035395 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:32.035409 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:32.035422 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:32.035435 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:30:32.035447 | orchestrator |
2026-02-05 00:30:32.035461 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-05 00:30:32.035473 | orchestrator | Thursday 05 February 2026 00:30:21 +0000 (0:00:00.283) 0:03:34.936 *****
2026-02-05 00:30:32.035486 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:30:32.035515 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:30:32.035528 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:30:32.035540 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:30:32.035553 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:30:32.035564 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:30:32.035575 | orchestrator | ok: [testbed-manager]
2026-02-05 00:30:32.035608 | orchestrator |
2026-02-05 00:30:32.035619 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-05 00:30:32.035630 | orchestrator | Thursday 05 February 2026 00:30:26 +0000 (0:00:05.061) 0:03:39.998 *****
2026-02-05 00:30:32.035706 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-05 00:30:32.035721 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:30:32.035732 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-05 00:30:32.035743 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-05 00:30:32.035754 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:30:32.035765 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:30:32.035776 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-05 00:30:32.035787 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-05 00:30:32.035798 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:30:32.035809 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:30:32.035820 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-05 00:30:32.035831 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:30:32.035842 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-05 00:30:32.035852
| orchestrator | skipping: [testbed-node-2] 2026-02-05 00:30:32.035863 | orchestrator | 2026-02-05 00:30:32.035874 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-05 00:30:32.035885 | orchestrator | Thursday 05 February 2026 00:30:26 +0000 (0:00:00.283) 0:03:40.281 ***** 2026-02-05 00:30:32.035897 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-05 00:30:32.035908 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-05 00:30:32.035919 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-05 00:30:32.035950 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-05 00:30:32.035962 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-05 00:30:32.035973 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-05 00:30:32.035996 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-05 00:30:32.036007 | orchestrator | 2026-02-05 00:30:32.036018 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-05 00:30:32.036029 | orchestrator | Thursday 05 February 2026 00:30:27 +0000 (0:00:01.014) 0:03:41.296 ***** 2026-02-05 00:30:32.036042 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:30:32.036056 | orchestrator | 2026-02-05 00:30:32.036067 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-05 00:30:32.036078 | orchestrator | Thursday 05 February 2026 00:30:28 +0000 (0:00:00.381) 0:03:41.678 ***** 2026-02-05 00:30:32.036089 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:32.036100 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:32.036110 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:32.036121 | orchestrator | ok: 
[testbed-node-0] 2026-02-05 00:30:32.036132 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:32.036143 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:32.036154 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:30:32.036164 | orchestrator | 2026-02-05 00:30:32.036175 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-05 00:30:32.036186 | orchestrator | Thursday 05 February 2026 00:30:29 +0000 (0:00:01.430) 0:03:43.108 ***** 2026-02-05 00:30:32.036197 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:32.036208 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:32.036219 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:32.036239 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:32.036260 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:30:32.036288 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:32.036330 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:30:32.036351 | orchestrator | 2026-02-05 00:30:32.036370 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-05 00:30:32.036390 | orchestrator | Thursday 05 February 2026 00:30:30 +0000 (0:00:00.622) 0:03:43.731 ***** 2026-02-05 00:30:32.036412 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:30:32.036432 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:30:32.036454 | orchestrator | changed: [testbed-manager] 2026-02-05 00:30:32.036475 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:30:32.036487 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:30:32.036497 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:30:32.036508 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:30:32.036524 | orchestrator | 2026-02-05 00:30:32.036546 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-05 00:30:32.036573 | orchestrator | Thursday 05 February 2026 00:30:30 +0000 (0:00:00.625) 
0:03:44.356 ***** 2026-02-05 00:30:32.036591 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:32.036609 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:32.036627 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:30:32.036672 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:32.036690 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:32.036708 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:32.036725 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:30:32.036743 | orchestrator | 2026-02-05 00:30:32.036762 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-05 00:30:32.036781 | orchestrator | Thursday 05 February 2026 00:30:31 +0000 (0:00:00.636) 0:03:44.993 ***** 2026-02-05 00:30:32.036817 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249908.5570056, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:32.036858 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249920.009993, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:32.036880 | orchestrator | changed: 
[testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249912.380978, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:32.036941 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249923.018261, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456555 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249921.1639283, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456737 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 
567, 'dev': 2049, 'nlink': 1, 'atime': 1770249922.1799443, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456757 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770249909.450721, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456770 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456824 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456837 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456848 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456912 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456926 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456938 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 00:30:37.456950 | orchestrator | 2026-02-05 00:30:37.456964 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-05 00:30:37.456977 | orchestrator | Thursday 05 February 2026 00:30:32 +0000 (0:00:01.091) 0:03:46.085 ***** 2026-02-05 00:30:37.456989 | orchestrator | changed: [testbed-manager] 2026-02-05 00:30:37.457001 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:30:37.457012 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:30:37.457031 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:30:37.457045 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:30:37.457057 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:30:37.457070 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:30:37.457082 | orchestrator | 2026-02-05 00:30:37.457095 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
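The motd tasks above dismantle the dynamic Ubuntu motd machinery, including deleting the `pam_motd.so` rule from every file found under `/etc/pam.d` (the log shows `/etc/pam.d/sshd` and `/etc/pam.d/login` changing on each host). A hedged sketch of that per-file edit, not the collection's actual task, might look like:

```python
# Illustrative sketch of the "Remove pam_motd.so rule" step: drop any PAM
# rule line referencing pam_motd.so from a pam.d file, returning True when
# the file was modified (the "changed" state in the log) and False when it
# was already clean. The file path is supplied by the caller.
from pathlib import Path

def remove_pam_motd(pam_file):
    path = Path(pam_file)
    lines = path.read_text().splitlines(keepends=True)
    kept = [line for line in lines if "pam_motd.so" not in line]
    if len(kept) != len(lines):
        path.write_text("".join(kept))
        return True
    return False
```

Run twice against the same file, the second call reports no change, matching the idempotent behaviour Ansible's loop output implies.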
************************************ 2026-02-05 00:30:37.457108 | orchestrator | Thursday 05 February 2026 00:30:33 +0000 (0:00:01.123) 0:03:47.208 ***** 2026-02-05 00:30:37.457121 | orchestrator | changed: [testbed-manager] 2026-02-05 00:30:37.457133 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:30:37.457146 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:30:37.457160 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:30:37.457178 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:30:37.457191 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:30:37.457204 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:30:37.457215 | orchestrator | 2026-02-05 00:30:37.457226 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-05 00:30:37.457237 | orchestrator | Thursday 05 February 2026 00:30:34 +0000 (0:00:01.221) 0:03:48.429 ***** 2026-02-05 00:30:37.457248 | orchestrator | changed: [testbed-manager] 2026-02-05 00:30:37.457259 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:30:37.457269 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:30:37.457280 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:30:37.457291 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:30:37.457302 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:30:37.457312 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:30:37.457323 | orchestrator | 2026-02-05 00:30:37.457334 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-05 00:30:37.457345 | orchestrator | Thursday 05 February 2026 00:30:36 +0000 (0:00:01.201) 0:03:49.630 ***** 2026-02-05 00:30:37.457356 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:30:37.457367 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:30:37.457378 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:30:37.457397 | orchestrator | skipping: [testbed-manager] 
2026-02-05 00:30:37.457416 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:30:37.457435 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:30:37.457456 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:30:37.457476 | orchestrator | 2026-02-05 00:30:37.457498 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-05 00:30:37.457518 | orchestrator | Thursday 05 February 2026 00:30:36 +0000 (0:00:00.280) 0:03:49.911 ***** 2026-02-05 00:30:37.457539 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:30:37.457553 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:30:37.457564 | orchestrator | ok: [testbed-manager] 2026-02-05 00:30:37.457575 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:30:37.457585 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:30:37.457596 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:30:37.457607 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:30:37.457617 | orchestrator | 2026-02-05 00:30:37.457628 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-05 00:30:37.457639 | orchestrator | Thursday 05 February 2026 00:30:37 +0000 (0:00:00.704) 0:03:50.616 ***** 2026-02-05 00:30:37.457681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:30:37.457695 | orchestrator | 2026-02-05 00:30:37.457706 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-05 00:30:37.457726 | orchestrator | Thursday 05 February 2026 00:30:37 +0000 (0:00:00.357) 0:03:50.974 ***** 2026-02-05 00:31:52.338209 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338344 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:31:52.338362 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 00:31:52.338373 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:31:52.338408 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:31:52.338420 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:31:52.338429 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:31:52.338436 | orchestrator | 2026-02-05 00:31:52.338444 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-05 00:31:52.338497 | orchestrator | Thursday 05 February 2026 00:30:46 +0000 (0:00:08.844) 0:03:59.818 ***** 2026-02-05 00:31:52.338503 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338509 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:31:52.338515 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:31:52.338521 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:31:52.338526 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:31:52.338532 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:31:52.338538 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:31:52.338544 | orchestrator | 2026-02-05 00:31:52.338550 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-05 00:31:52.338555 | orchestrator | Thursday 05 February 2026 00:30:47 +0000 (0:00:01.464) 0:04:01.283 ***** 2026-02-05 00:31:52.338578 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:31:52.338584 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:31:52.338590 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:31:52.338596 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338601 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:31:52.338607 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:31:52.338613 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:31:52.338619 | orchestrator | 2026-02-05 00:31:52.338625 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-05 00:31:52.338631 | orchestrator | 
Thursday 05 February 2026 00:30:48 +0000 (0:00:01.133) 0:04:02.416 ***** 2026-02-05 00:31:52.338637 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:31:52.338642 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:31:52.338648 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:31:52.338655 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338666 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:31:52.338672 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:31:52.338678 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:31:52.338684 | orchestrator | 2026-02-05 00:31:52.338740 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-05 00:31:52.338753 | orchestrator | Thursday 05 February 2026 00:30:49 +0000 (0:00:00.293) 0:04:02.709 ***** 2026-02-05 00:31:52.338764 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:31:52.338774 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:31:52.338786 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:31:52.338797 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338806 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:31:52.338813 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:31:52.338820 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:31:52.338827 | orchestrator | 2026-02-05 00:31:52.338833 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-05 00:31:52.338840 | orchestrator | Thursday 05 February 2026 00:30:49 +0000 (0:00:00.327) 0:04:03.037 ***** 2026-02-05 00:31:52.338846 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:31:52.338851 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:31:52.338857 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:31:52.338863 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338868 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:31:52.338874 | orchestrator | ok: [testbed-node-1] 2026-02-05 
00:31:52.338880 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:31:52.338886 | orchestrator | 2026-02-05 00:31:52.338891 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-05 00:31:52.338897 | orchestrator | Thursday 05 February 2026 00:30:49 +0000 (0:00:00.297) 0:04:03.335 ***** 2026-02-05 00:31:52.338903 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:31:52.338918 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:31:52.338924 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:31:52.338949 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:31:52.338955 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:31:52.338961 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:31:52.338966 | orchestrator | ok: [testbed-manager] 2026-02-05 00:31:52.338972 | orchestrator | 2026-02-05 00:31:52.338978 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-05 00:31:52.338984 | orchestrator | Thursday 05 February 2026 00:30:54 +0000 (0:00:04.638) 0:04:07.973 ***** 2026-02-05 00:31:52.338992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:31:52.339000 | orchestrator | 2026-02-05 00:31:52.339006 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-05 00:31:52.339012 | orchestrator | Thursday 05 February 2026 00:30:54 +0000 (0:00:00.389) 0:04:08.363 ***** 2026-02-05 00:31:52.339018 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339023 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-05 00:31:52.339029 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339035 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 00:31:52.339041 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-05 00:31:52.339049 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339059 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-05 00:31:52.339071 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:31:52.339081 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339091 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:31:52.339101 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-05 00:31:52.339112 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339124 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-05 00:31:52.339134 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:31:52.339145 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339152 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-05 00:31:52.339179 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:31:52.339189 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:31:52.339198 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-05 00:31:52.339208 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-05 00:31:52.339219 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:31:52.339229 | orchestrator | 2026-02-05 00:31:52.339239 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-05 00:31:52.339260 | orchestrator | Thursday 05 February 2026 00:30:55 +0000 (0:00:00.342) 0:04:08.706 ***** 2026-02-05 00:31:52.339267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:31:52.339274 | orchestrator | 2026-02-05 00:31:52.339284 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-05 00:31:52.339293 | orchestrator | Thursday 05 February 2026 00:30:55 +0000 (0:00:00.447) 0:04:09.153 ***** 2026-02-05 00:31:52.339303 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-05 00:31:52.339313 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-05 00:31:52.339323 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:31:52.339333 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-05 00:31:52.339362 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:31:52.339373 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:31:52.339382 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-05 00:31:52.339398 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:31:52.339406 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-05 00:31:52.339414 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-05 00:31:52.339422 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:31:52.339435 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:31:52.339449 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-05 00:31:52.339457 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:31:52.339467 | orchestrator | 2026-02-05 00:31:52.339476 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-05 00:31:52.339485 | orchestrator | Thursday 05 February 2026 00:30:55 +0000 (0:00:00.338) 0:04:09.491 ***** 2026-02-05 00:31:52.339495 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:31:52.339503 | orchestrator |
2026-02-05 00:31:52.339512 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-05 00:31:52.339521 | orchestrator | Thursday 05 February 2026 00:30:56 +0000 (0:00:00.376) 0:04:09.868 *****
2026-02-05 00:31:52.339535 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:52.339545 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:52.339554 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:52.339563 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:52.339573 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:52.339584 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:52.339591 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:52.339597 | orchestrator |
2026-02-05 00:31:52.339603 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-05 00:31:52.339609 | orchestrator | Thursday 05 February 2026 00:31:28 +0000 (0:00:32.366) 0:04:42.234 *****
2026-02-05 00:31:52.339615 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:52.339620 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:52.339626 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:52.339631 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:52.339637 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:52.339643 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:52.339648 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:52.339654 | orchestrator |
2026-02-05 00:31:52.339660 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-05 00:31:52.339665 | orchestrator | Thursday 05 February 2026 00:31:36 +0000 (0:00:08.195) 0:04:50.430 *****
2026-02-05 00:31:52.339671 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:52.339677 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:52.339683 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:52.339688 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:52.339717 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:52.339723 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:52.339729 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:52.339734 | orchestrator |
2026-02-05 00:31:52.339740 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-05 00:31:52.339746 | orchestrator | Thursday 05 February 2026 00:31:44 +0000 (0:00:07.733) 0:04:58.164 *****
2026-02-05 00:31:52.339752 | orchestrator | ok: [testbed-manager]
2026-02-05 00:31:52.339757 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:31:52.339763 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:31:52.339769 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:31:52.339774 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:31:52.339780 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:31:52.339786 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:31:52.339791 | orchestrator |
2026-02-05 00:31:52.339797 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-05 00:31:52.339810 | orchestrator | Thursday 05 February 2026 00:31:46 +0000 (0:00:01.876) 0:05:00.041 *****
2026-02-05 00:31:52.339816 | orchestrator | changed: [testbed-manager]
2026-02-05 00:31:52.339822 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:31:52.339828 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:31:52.339833 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:31:52.339839 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:31:52.339845 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:31:52.339850 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:31:52.339856 | orchestrator |
2026-02-05 00:31:52.339869 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-05 00:32:03.279305 | orchestrator | Thursday 05 February 2026 00:31:52 +0000 (0:00:05.826) 0:05:05.868 *****
2026-02-05 00:32:03.279459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:03.279482 | orchestrator |
2026-02-05 00:32:03.279496 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-05 00:32:03.279509 | orchestrator | Thursday 05 February 2026 00:31:52 +0000 (0:00:00.401) 0:05:06.269 *****
2026-02-05 00:32:03.279520 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:03.279532 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:03.279543 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:03.279554 | orchestrator | changed: [testbed-manager]
2026-02-05 00:32:03.279565 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:03.279576 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:03.279587 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:03.279598 | orchestrator |
2026-02-05 00:32:03.279609 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-05 00:32:03.279620 | orchestrator | Thursday 05 February 2026 00:31:53 +0000 (0:00:00.732) 0:05:07.001 *****
2026-02-05 00:32:03.279631 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:03.279643 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:03.279655 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:03.279665 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:03.279676 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:03.279687 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:03.279726 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:03.279747 | orchestrator |
2026-02-05 00:32:03.279766 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-05 00:32:03.279788 | orchestrator | Thursday 05 February 2026 00:31:55 +0000 (0:00:01.785) 0:05:08.787 *****
2026-02-05 00:32:03.279808 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:32:03.279823 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:32:03.279836 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:32:03.279849 | orchestrator | changed: [testbed-manager]
2026-02-05 00:32:03.279862 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:32:03.279875 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:32:03.279888 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:32:03.279901 | orchestrator |
2026-02-05 00:32:03.279913 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-05 00:32:03.279924 | orchestrator | Thursday 05 February 2026 00:31:56 +0000 (0:00:00.892) 0:05:09.680 *****
2026-02-05 00:32:03.279935 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:03.279946 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:03.279957 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:03.279968 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:03.279979 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:03.279990 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:03.280001 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:03.280012 | orchestrator |
2026-02-05 00:32:03.280023 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-05 00:32:03.280051 | orchestrator | Thursday 05 February 2026 00:31:56 +0000 (0:00:00.274) 0:05:09.955 *****
2026-02-05 00:32:03.280084 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:03.280096 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:03.280107 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:03.280117 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:03.280128 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:03.280139 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:03.280150 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:03.280160 | orchestrator |
2026-02-05 00:32:03.280171 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-05 00:32:03.280182 | orchestrator | Thursday 05 February 2026 00:31:56 +0000 (0:00:00.389) 0:05:10.344 *****
2026-02-05 00:32:03.280193 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:03.280204 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:03.280215 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:03.280226 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:03.280237 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:03.280248 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:03.280258 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:03.280269 | orchestrator |
2026-02-05 00:32:03.280280 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-05 00:32:03.280291 | orchestrator | Thursday 05 February 2026 00:31:57 +0000 (0:00:00.269) 0:05:10.614 *****
2026-02-05 00:32:03.280302 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:03.280313 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:03.280323 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:03.280334 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:03.280345 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:03.280356 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:03.280366 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:03.280377 | orchestrator |
2026-02-05 00:32:03.280388 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-05 00:32:03.280400 | orchestrator | Thursday 05 February 2026 00:31:57 +0000 (0:00:00.296) 0:05:10.910 *****
2026-02-05 00:32:03.280411 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:03.280422 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:03.280432 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:03.280443 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:03.280454 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:03.280465 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:03.280475 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:03.280486 | orchestrator |
2026-02-05 00:32:03.280497 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-05 00:32:03.280508 | orchestrator | Thursday 05 February 2026 00:31:57 +0000 (0:00:00.251) 0:05:11.177 *****
2026-02-05 00:32:03.280519 | orchestrator | ok: [testbed-node-3] =>
2026-02-05 00:32:03.280530 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280541 | orchestrator | ok: [testbed-node-4] =>
2026-02-05 00:32:03.280552 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280563 | orchestrator | ok: [testbed-node-5] =>
2026-02-05 00:32:03.280574 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280585 | orchestrator | ok: [testbed-manager] =>
2026-02-05 00:32:03.280595 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280625 | orchestrator | ok: [testbed-node-0] =>
2026-02-05 00:32:03.280637 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280648 | orchestrator | ok: [testbed-node-1] =>
2026-02-05 00:32:03.280658 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280669 | orchestrator | ok: [testbed-node-2] =>
2026-02-05 00:32:03.280680 | orchestrator |  docker_version: 5:27.5.1
2026-02-05 00:32:03.280691 | orchestrator |
2026-02-05 00:32:03.280750 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-05 00:32:03.280762 | orchestrator | Thursday 05 February 2026 00:31:57 +0000 (0:00:00.281) 0:05:11.428 *****
2026-02-05 00:32:03.280773 | orchestrator | ok: [testbed-node-3] =>
2026-02-05 00:32:03.280794 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.280805 | orchestrator | ok: [testbed-node-4] =>
2026-02-05 00:32:03.280816 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.280826 | orchestrator | ok: [testbed-node-5] =>
2026-02-05 00:32:03.280837 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.280848 | orchestrator | ok: [testbed-manager] =>
2026-02-05 00:32:03.280858 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.280869 | orchestrator | ok: [testbed-node-0] =>
2026-02-05 00:32:03.280880 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.280890 | orchestrator | ok: [testbed-node-1] =>
2026-02-05 00:32:03.280901 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.280999 | orchestrator | ok: [testbed-node-2] =>
2026-02-05 00:32:03.281013 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-05 00:32:03.281024 | orchestrator |
2026-02-05 00:32:03.281035 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-05 00:32:03.281046 | orchestrator | Thursday 05 February 2026 00:31:58 +0000 (0:00:00.281) 0:05:11.710 *****
2026-02-05 00:32:03.281057 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:03.281068 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:03.281079 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:03.281090 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:03.281101 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:03.281111 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:03.281122 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:03.281133 | orchestrator |
2026-02-05 00:32:03.281144 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-05 00:32:03.281155 | orchestrator | Thursday 05 February 2026 00:31:58 +0000 (0:00:00.247) 0:05:11.957 *****
2026-02-05 00:32:03.281166 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:03.281177 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:03.281188 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:03.281199 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:32:03.281213 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:32:03.281231 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:32:03.281249 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:32:03.281268 | orchestrator |
2026-02-05 00:32:03.281288 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-05 00:32:03.281307 | orchestrator | Thursday 05 February 2026 00:31:58 +0000 (0:00:00.390) 0:05:12.348 *****
2026-02-05 00:32:03.281337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:32:03.281351 | orchestrator |
2026-02-05 00:32:03.281363 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-05 00:32:03.281374 | orchestrator | Thursday 05 February 2026 00:31:59 +0000 (0:00:00.409) 0:05:12.758 *****
2026-02-05 00:32:03.281385 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:03.281396 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:03.281407 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:03.281418 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:03.281429 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:03.281439 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:03.281450 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:03.281461 | orchestrator |
2026-02-05 00:32:03.281491 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-05 00:32:03.281502 | orchestrator | Thursday 05 February 2026 00:32:00 +0000 (0:00:00.838) 0:05:13.596 *****
2026-02-05 00:32:03.281513 | orchestrator | ok: [testbed-manager]
2026-02-05 00:32:03.281524 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:32:03.281535 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:32:03.281545 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:32:03.281556 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:32:03.281575 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:32:03.281587 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:32:03.281598 | orchestrator |
2026-02-05 00:32:03.281608 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-05 00:32:03.281621 | orchestrator | Thursday 05 February 2026 00:32:02 +0000 (0:00:02.875) 0:05:16.472 *****
2026-02-05 00:32:03.281632 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-05 00:32:03.281643 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-05 00:32:03.281654 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-05 00:32:03.281666 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-05 00:32:03.281677 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-05 00:32:03.281687 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-05 00:32:03.281728 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:32:03.281740 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-05 00:32:03.281751 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-05 00:32:03.281799 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-05 00:32:03.281812 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:32:03.281823 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-05 00:32:03.281833 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-05 00:32:03.281844 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-05 00:32:03.281855 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:32:03.281866 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-05 00:32:03.281889 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-05 00:33:07.412104 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-05 00:33:07.412296 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:07.412320 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-05 00:33:07.412334 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-05 00:33:07.412348 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-05 00:33:07.412361 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:07.412375 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:07.412389 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-05 00:33:07.412403 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-05 00:33:07.412418 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-05 00:33:07.412431 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:07.412444 | orchestrator |
2026-02-05 00:33:07.412454 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-05 00:33:07.412464 | orchestrator | Thursday 05 February 2026 00:32:03 +0000 (0:00:00.541) 0:05:17.013 *****
2026-02-05 00:33:07.412472 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.412480 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.412488 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.412500 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.412514 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.412528 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.412542 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.412557 | orchestrator |
2026-02-05 00:33:07.412570 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-05 00:33:07.412584 | orchestrator | Thursday 05 February 2026 00:32:10 +0000 (0:00:07.036) 0:05:24.050 *****
2026-02-05 00:33:07.412598 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.412612 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.412625 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.412640 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.412653 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.412667 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.412712 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.412756 | orchestrator |
2026-02-05 00:33:07.412773 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-05 00:33:07.412787 | orchestrator | Thursday 05 February 2026 00:32:11 +0000 (0:00:01.111) 0:05:25.161 *****
2026-02-05 00:33:07.412800 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.412814 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.412829 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.412842 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.412856 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.412866 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.412874 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.412882 | orchestrator |
2026-02-05 00:33:07.412890 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-05 00:33:07.412898 | orchestrator | Thursday 05 February 2026 00:32:20 +0000 (0:00:08.858) 0:05:34.020 *****
2026-02-05 00:33:07.412906 | orchestrator | changed: [testbed-manager]
2026-02-05 00:33:07.412914 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.412922 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.412943 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.412951 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.412959 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.412967 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.412974 | orchestrator |
2026-02-05 00:33:07.412982 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-05 00:33:07.412990 | orchestrator | Thursday 05 February 2026 00:32:23 +0000 (0:00:03.104) 0:05:37.125 *****
2026-02-05 00:33:07.413002 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413028 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.413042 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413067 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413081 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413091 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413099 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413107 | orchestrator |
2026-02-05 00:33:07.413115 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-05 00:33:07.413123 | orchestrator | Thursday 05 February 2026 00:32:25 +0000 (0:00:01.546) 0:05:38.672 *****
2026-02-05 00:33:07.413131 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413139 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413146 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413154 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.413162 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413169 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413177 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413185 | orchestrator |
2026-02-05 00:33:07.413193 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-05 00:33:07.413201 | orchestrator | Thursday 05 February 2026 00:32:26 +0000 (0:00:00.821) 0:05:40.121 *****
2026-02-05 00:33:07.413209 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:07.413216 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:07.413225 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:07.413232 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:07.413240 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:07.413248 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:07.413256 | orchestrator | changed: [testbed-manager]
2026-02-05 00:33:07.413264 | orchestrator |
2026-02-05 00:33:07.413272 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-05 00:33:07.413280 | orchestrator | Thursday 05 February 2026 00:32:27 +0000 (0:00:00.821) 0:05:40.942 *****
2026-02-05 00:33:07.413288 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.413295 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413303 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413318 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413326 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413334 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413342 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413350 | orchestrator |
2026-02-05 00:33:07.413358 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-05 00:33:07.413400 | orchestrator | Thursday 05 February 2026 00:32:37 +0000 (0:00:09.763) 0:05:50.705 *****
2026-02-05 00:33:07.413409 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413417 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413425 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413432 | orchestrator | changed: [testbed-manager]
2026-02-05 00:33:07.413440 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413448 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413456 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413463 | orchestrator |
2026-02-05 00:33:07.413471 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-05 00:33:07.413479 | orchestrator | Thursday 05 February 2026 00:32:38 +0000 (0:00:00.927) 0:05:51.633 *****
2026-02-05 00:33:07.413487 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.413495 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413502 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413510 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413518 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413526 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413533 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413541 | orchestrator |
2026-02-05 00:33:07.413549 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-05 00:33:07.413557 | orchestrator | Thursday 05 February 2026 00:32:48 +0000 (0:00:10.461) 0:06:02.095 *****
2026-02-05 00:33:07.413564 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.413572 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413580 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413588 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413595 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413603 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413611 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413619 | orchestrator |
2026-02-05 00:33:07.413626 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-05 00:33:07.413634 | orchestrator | Thursday 05 February 2026 00:33:00 +0000 (0:00:12.317) 0:06:14.412 *****
2026-02-05 00:33:07.413642 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-05 00:33:07.413650 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-05 00:33:07.413658 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-05 00:33:07.413666 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-05 00:33:07.413674 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-05 00:33:07.413682 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-05 00:33:07.413689 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-05 00:33:07.413697 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-05 00:33:07.413705 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-05 00:33:07.413713 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-05 00:33:07.413721 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-05 00:33:07.413747 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-05 00:33:07.413763 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-05 00:33:07.413772 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-05 00:33:07.413779 | orchestrator |
2026-02-05 00:33:07.413788 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-05 00:33:07.413796 | orchestrator | Thursday 05 February 2026 00:33:02 +0000 (0:00:01.192) 0:06:15.605 *****
2026-02-05 00:33:07.413804 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:07.413818 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:07.413826 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:07.413834 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:07.413842 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:07.413850 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:07.413857 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:07.413865 | orchestrator |
2026-02-05 00:33:07.413873 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-05 00:33:07.413881 | orchestrator | Thursday 05 February 2026 00:33:02 +0000 (0:00:00.503) 0:06:16.108 *****
2026-02-05 00:33:07.413889 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:07.413897 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:07.413905 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:07.413912 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:07.413920 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:07.413928 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:07.413936 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:07.413944 | orchestrator |
2026-02-05 00:33:07.413952 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-05 00:33:07.413961 | orchestrator | Thursday 05 February 2026 00:33:06 +0000 (0:00:03.893) 0:06:20.001 *****
2026-02-05 00:33:07.413970 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:07.413984 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:07.413997 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:07.414010 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:07.414069 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:07.414079 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:07.414086 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:07.414094 | orchestrator |
2026-02-05 00:33:07.414144 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-05 00:33:07.414153 | orchestrator | Thursday 05 February 2026 00:33:07 +0000 (0:00:00.673) 0:06:20.675 *****
2026-02-05 00:33:07.414161 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-05 00:33:07.414169 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-05 00:33:07.414177 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:07.414185 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-05 00:33:07.414193 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-05 00:33:07.414201 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:07.414208 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-05 00:33:07.414216 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-05 00:33:07.414224 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:07.414240 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-05 00:33:26.651060 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-05 00:33:26.651138 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:26.651144 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-05 00:33:26.651150 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-05 00:33:26.651155 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:26.651159 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-05 00:33:26.651163 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-05 00:33:26.651168 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:26.651172 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-05 00:33:26.651176 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-05 00:33:26.651180 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:26.651185 | orchestrator |
2026-02-05 00:33:26.651191 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-05 00:33:26.651212 | orchestrator | Thursday 05 February 2026 00:33:07 +0000 (0:00:00.544) 0:06:21.220 *****
2026-02-05 00:33:26.651216 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:26.651221 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:26.651225 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:26.651229 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:26.651233 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:26.651237 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:26.651241 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:26.651245 | orchestrator |
2026-02-05 00:33:26.651249 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-05 00:33:26.651253 | orchestrator | Thursday 05 February 2026 00:33:08 +0000 (0:00:00.478) 0:06:21.705 *****
2026-02-05 00:33:26.651257 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:26.651261 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:26.651265 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:26.651269 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:26.651273 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:26.651277 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:26.651281 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:26.651285 | orchestrator |
2026-02-05 00:33:26.651289 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-05 00:33:26.651293 | orchestrator | Thursday 05 February 2026 00:33:08 +0000 (0:00:00.492) 0:06:22.184 *****
2026-02-05 00:33:26.651297 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:33:26.651301 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:33:26.651305 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:33:26.651309 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:26.651313 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:33:26.651317 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:33:26.651321 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:33:26.651325 | orchestrator |
2026-02-05 00:33:26.651329 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-05 00:33:26.651333 | orchestrator | Thursday 05 February 2026 00:33:09 +0000 (0:00:00.492) 0:06:22.676 *****
2026-02-05 00:33:26.651346 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:26.651351 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:33:26.651355 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:33:26.651358 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:33:26.651362 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:33:26.651366 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:33:26.651370 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:33:26.651374 | orchestrator |
2026-02-05 00:33:26.651378 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-05 00:33:26.651382 | orchestrator | Thursday 05 February 2026 00:33:11 +0000 (0:00:02.217) 0:06:24.894 *****
2026-02-05 00:33:26.651387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:33:26.651393 | orchestrator |
2026-02-05 00:33:26.651397 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-05 00:33:26.651401 | orchestrator | Thursday 05 February 2026 00:33:12 +0000 (0:00:00.823) 0:06:25.717 *****
2026-02-05 00:33:26.651405 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:26.651409 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:26.651412 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:26.651416 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:26.651420 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:26.651424 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:26.651428 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:26.651432 | orchestrator |
2026-02-05 00:33:26.651436 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-05 00:33:26.651444 | orchestrator | Thursday 05 February 2026 00:33:13 +0000 (0:00:00.842) 0:06:26.560 *****
2026-02-05 00:33:26.651448 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:26.651452 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:26.651456 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:26.651460 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:26.651464 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:26.651468 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:26.651472 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:26.651476 | orchestrator |
2026-02-05 00:33:26.651480 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-05 00:33:26.651484 | orchestrator | Thursday 05 February 2026 00:33:14 +0000 (0:00:01.061) 0:06:27.622 *****
2026-02-05 00:33:26.651488 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:26.651491 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:26.651495 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:26.651499 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:26.651503 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:26.651507 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:26.651511 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:26.651515 | orchestrator |
2026-02-05 00:33:26.651519 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-05 00:33:26.651531 | orchestrator | Thursday 05 February 2026 00:33:15 +0000 (0:00:01.399) 0:06:29.022 *****
2026-02-05 00:33:26.651536 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:33:26.651540 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:33:26.651544 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:33:26.651547 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:33:26.651551 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:33:26.651555 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:33:26.651559 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:33:26.651563 | orchestrator |
2026-02-05 00:33:26.651567 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-05 00:33:26.651571 | orchestrator | Thursday 05 February 2026 00:33:16 +0000 (0:00:01.384) 0:06:30.407 *****
2026-02-05 00:33:26.651575 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:33:26.651579 | orchestrator | ok: [testbed-manager]
2026-02-05 00:33:26.651583 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:33:26.651587 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:33:26.651591 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:33:26.651594 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:33:26.651598 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:33:26.651602 | orchestrator | 2026-02-05 
00:33:26.651606 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-05 00:33:26.651610 | orchestrator | Thursday 05 February 2026 00:33:18 +0000 (0:00:01.293) 0:06:31.700 ***** 2026-02-05 00:33:26.651614 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:26.651618 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:26.651622 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:26.651626 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:26.651629 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:26.651634 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:26.651639 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:26.651643 | orchestrator | 2026-02-05 00:33:26.651648 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-05 00:33:26.651653 | orchestrator | Thursday 05 February 2026 00:33:19 +0000 (0:00:01.476) 0:06:33.177 ***** 2026-02-05 00:33:26.651657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:26.651662 | orchestrator | 2026-02-05 00:33:26.651666 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-05 00:33:26.651671 | orchestrator | Thursday 05 February 2026 00:33:20 +0000 (0:00:00.984) 0:06:34.162 ***** 2026-02-05 00:33:26.651683 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:26.651688 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:26.651693 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:26.651697 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:26.651702 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:26.651707 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:26.651711 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 00:33:26.651716 | orchestrator | 2026-02-05 00:33:26.651721 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-05 00:33:26.651725 | orchestrator | Thursday 05 February 2026 00:33:22 +0000 (0:00:01.396) 0:06:35.558 ***** 2026-02-05 00:33:26.651730 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:26.651751 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:26.651755 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:26.651759 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:26.651763 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:26.651767 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:26.651771 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:26.651775 | orchestrator | 2026-02-05 00:33:26.651779 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-05 00:33:26.651783 | orchestrator | Thursday 05 February 2026 00:33:23 +0000 (0:00:01.104) 0:06:36.662 ***** 2026-02-05 00:33:26.651787 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:26.651790 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:26.651794 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:26.651798 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:26.651802 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:26.651806 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:26.651810 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:26.651814 | orchestrator | 2026-02-05 00:33:26.651818 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-05 00:33:26.651822 | orchestrator | Thursday 05 February 2026 00:33:24 +0000 (0:00:01.157) 0:06:37.819 ***** 2026-02-05 00:33:26.651825 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:26.651829 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:26.651833 | orchestrator | ok: [testbed-node-5] 2026-02-05 
00:33:26.651837 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:26.651841 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:26.651845 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:26.651849 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:26.651853 | orchestrator | 2026-02-05 00:33:26.651857 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-05 00:33:26.651861 | orchestrator | Thursday 05 February 2026 00:33:25 +0000 (0:00:01.333) 0:06:39.153 ***** 2026-02-05 00:33:26.651865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:26.651869 | orchestrator | 2026-02-05 00:33:26.651873 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:26.651877 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.880) 0:06:40.034 ***** 2026-02-05 00:33:26.651881 | orchestrator | 2026-02-05 00:33:26.651885 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:26.651889 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.048) 0:06:40.083 ***** 2026-02-05 00:33:26.651892 | orchestrator | 2026-02-05 00:33:26.651896 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:26.651900 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.038) 0:06:40.122 ***** 2026-02-05 00:33:26.651904 | orchestrator | 2026-02-05 00:33:26.651908 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:26.651915 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.055) 0:06:40.177 ***** 2026-02-05 00:33:52.308071 | orchestrator | 
2026-02-05 00:33:52.308193 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:52.308235 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.038) 0:06:40.216 ***** 2026-02-05 00:33:52.308248 | orchestrator | 2026-02-05 00:33:52.308259 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:52.308270 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.037) 0:06:40.253 ***** 2026-02-05 00:33:52.308281 | orchestrator | 2026-02-05 00:33:52.308293 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-05 00:33:52.308304 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.042) 0:06:40.296 ***** 2026-02-05 00:33:52.308315 | orchestrator | 2026-02-05 00:33:52.308326 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-05 00:33:52.308337 | orchestrator | Thursday 05 February 2026 00:33:26 +0000 (0:00:00.037) 0:06:40.333 ***** 2026-02-05 00:33:52.308349 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:52.308362 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:52.308384 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:52.308404 | orchestrator | 2026-02-05 00:33:52.308425 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-05 00:33:52.308444 | orchestrator | Thursday 05 February 2026 00:33:28 +0000 (0:00:01.205) 0:06:41.538 ***** 2026-02-05 00:33:52.308466 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:52.308489 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:52.308509 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:52.308530 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:52.308550 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:52.308562 | orchestrator | changed: 
[testbed-node-1] 2026-02-05 00:33:52.308573 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:52.308584 | orchestrator | 2026-02-05 00:33:52.308595 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-05 00:33:52.308606 | orchestrator | Thursday 05 February 2026 00:33:29 +0000 (0:00:01.522) 0:06:43.061 ***** 2026-02-05 00:33:52.308617 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:52.308628 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:52.308638 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:52.308649 | orchestrator | changed: [testbed-manager] 2026-02-05 00:33:52.308660 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:52.308671 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:52.308681 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:52.308692 | orchestrator | 2026-02-05 00:33:52.308703 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-05 00:33:52.308714 | orchestrator | Thursday 05 February 2026 00:33:30 +0000 (0:00:01.200) 0:06:44.261 ***** 2026-02-05 00:33:52.308725 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:52.308735 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:52.308777 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:52.308789 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:52.308800 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:52.308811 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:52.308821 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:52.308832 | orchestrator | 2026-02-05 00:33:52.308859 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-05 00:33:52.308870 | orchestrator | Thursday 05 February 2026 00:33:32 +0000 (0:00:02.275) 0:06:46.537 ***** 2026-02-05 00:33:52.308881 | orchestrator | skipping: [testbed-node-3] 
2026-02-05 00:33:52.308892 | orchestrator | 2026-02-05 00:33:52.308903 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-05 00:33:52.308914 | orchestrator | Thursday 05 February 2026 00:33:33 +0000 (0:00:00.113) 0:06:46.650 ***** 2026-02-05 00:33:52.308924 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:52.308935 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:52.308946 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:33:52.308957 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:52.308969 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:52.308989 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:52.309000 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:52.309011 | orchestrator | 2026-02-05 00:33:52.309022 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-05 00:33:52.309033 | orchestrator | Thursday 05 February 2026 00:33:34 +0000 (0:00:00.984) 0:06:47.635 ***** 2026-02-05 00:33:52.309044 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:52.309061 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:52.309078 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:52.309089 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:52.309100 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:52.309110 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:52.309121 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:52.309132 | orchestrator | 2026-02-05 00:33:52.309143 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-05 00:33:52.309154 | orchestrator | Thursday 05 February 2026 00:33:34 +0000 (0:00:00.714) 0:06:48.349 ***** 2026-02-05 00:33:52.309165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:52.309178 | orchestrator | 2026-02-05 00:33:52.309189 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-05 00:33:52.309200 | orchestrator | Thursday 05 February 2026 00:33:35 +0000 (0:00:00.877) 0:06:49.227 ***** 2026-02-05 00:33:52.309211 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:52.309222 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:52.309233 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:52.309243 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:52.309254 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:52.309265 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:52.309275 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:52.309286 | orchestrator | 2026-02-05 00:33:52.309297 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-05 00:33:52.309308 | orchestrator | Thursday 05 February 2026 00:33:36 +0000 (0:00:00.848) 0:06:50.075 ***** 2026-02-05 00:33:52.309319 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-05 00:33:52.309348 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-05 00:33:52.309361 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-05 00:33:52.309380 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-05 00:33:52.309398 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-05 00:33:52.309416 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-05 00:33:52.309435 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-05 00:33:52.309454 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-05 00:33:52.309472 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-02-05 00:33:52.309490 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-05 00:33:52.309507 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-05 00:33:52.309518 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-05 00:33:52.309531 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-05 00:33:52.309548 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-05 00:33:52.309560 | orchestrator | 2026-02-05 00:33:52.309570 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-02-05 00:33:52.309581 | orchestrator | Thursday 05 February 2026 00:33:39 +0000 (0:00:02.603) 0:06:52.678 ***** 2026-02-05 00:33:52.309592 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:52.309603 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:52.309614 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:52.309624 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:52.309643 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:52.309654 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:52.309665 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:52.309676 | orchestrator | 2026-02-05 00:33:52.309694 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-05 00:33:52.309712 | orchestrator | Thursday 05 February 2026 00:33:39 +0000 (0:00:00.497) 0:06:53.175 ***** 2026-02-05 00:33:52.309731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:33:52.309777 | orchestrator | 2026-02-05 00:33:52.309794 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-02-05 00:33:52.309805 | orchestrator | Thursday 05 February 2026 00:33:40 +0000 (0:00:00.783) 0:06:53.958 ***** 2026-02-05 00:33:52.309816 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:52.309827 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:52.309849 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:52.309860 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:52.309871 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:52.309882 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:52.309892 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:52.309903 | orchestrator | 2026-02-05 00:33:52.309923 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-05 00:33:52.309934 | orchestrator | Thursday 05 February 2026 00:33:41 +0000 (0:00:00.809) 0:06:54.767 ***** 2026-02-05 00:33:52.309945 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:52.309956 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:52.309966 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:52.309977 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:52.309987 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:52.309998 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:52.310008 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:52.310083 | orchestrator | 2026-02-05 00:33:52.310096 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-05 00:33:52.310106 | orchestrator | Thursday 05 February 2026 00:33:42 +0000 (0:00:00.979) 0:06:55.747 ***** 2026-02-05 00:33:52.310117 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:52.310128 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:52.310139 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:52.310150 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:52.310161 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 00:33:52.310172 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:52.310182 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:52.310193 | orchestrator | 2026-02-05 00:33:52.310204 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-05 00:33:52.310215 | orchestrator | Thursday 05 February 2026 00:33:42 +0000 (0:00:00.461) 0:06:56.209 ***** 2026-02-05 00:33:52.310226 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:33:52.310236 | orchestrator | ok: [testbed-manager] 2026-02-05 00:33:52.310247 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:33:52.310258 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:33:52.310268 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:33:52.310279 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:33:52.310290 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:33:52.310300 | orchestrator | 2026-02-05 00:33:52.310311 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-05 00:33:52.310322 | orchestrator | Thursday 05 February 2026 00:33:44 +0000 (0:00:01.452) 0:06:57.661 ***** 2026-02-05 00:33:52.310333 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:33:52.310344 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:33:52.310354 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:33:52.310371 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:33:52.310392 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:33:52.310424 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:33:52.310447 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:33:52.310468 | orchestrator | 2026-02-05 00:33:52.310490 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-05 00:33:52.310511 | orchestrator | Thursday 05 February 2026 00:33:44 +0000 (0:00:00.478) 0:06:58.140 ***** 2026-02-05 00:33:52.310532 | orchestrator | 
ok: [testbed-manager] 2026-02-05 00:33:52.310547 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:33:52.310558 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:33:52.310569 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:33:52.310580 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:33:52.310590 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:33:52.310612 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:34:22.917904 | orchestrator | 2026-02-05 00:34:22.918083 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-02-05 00:34:22.918118 | orchestrator | Thursday 05 February 2026 00:33:52 +0000 (0:00:07.755) 0:07:05.895 ***** 2026-02-05 00:34:22.918141 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:34:22.918164 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:34:22.918183 | orchestrator | ok: [testbed-manager] 2026-02-05 00:34:22.918196 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:34:22.918208 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:34:22.918219 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:34:22.918230 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:34:22.918242 | orchestrator | 2026-02-05 00:34:22.918253 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-05 00:34:22.918264 | orchestrator | Thursday 05 February 2026 00:33:53 +0000 (0:00:01.326) 0:07:07.221 ***** 2026-02-05 00:34:22.918276 | orchestrator | ok: [testbed-manager] 2026-02-05 00:34:22.918287 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:34:22.918298 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:34:22.918309 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:34:22.918320 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:34:22.918331 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:34:22.918342 | orchestrator | changed: [testbed-node-2] 2026-02-05 
00:34:22.918353 | orchestrator | 2026-02-05 00:34:22.918364 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-05 00:34:22.918375 | orchestrator | Thursday 05 February 2026 00:33:55 +0000 (0:00:01.725) 0:07:08.946 ***** 2026-02-05 00:34:22.918386 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:34:22.918397 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:34:22.918408 | orchestrator | ok: [testbed-manager] 2026-02-05 00:34:22.918419 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:34:22.918430 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:34:22.918440 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:34:22.918451 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:34:22.918462 | orchestrator | 2026-02-05 00:34:22.918473 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 00:34:22.918484 | orchestrator | Thursday 05 February 2026 00:33:57 +0000 (0:00:01.764) 0:07:10.711 ***** 2026-02-05 00:34:22.918495 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:34:22.918506 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:34:22.918517 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:34:22.918528 | orchestrator | ok: [testbed-manager] 2026-02-05 00:34:22.918539 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:34:22.918550 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:34:22.918561 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:34:22.918571 | orchestrator | 2026-02-05 00:34:22.918582 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 00:34:22.918593 | orchestrator | Thursday 05 February 2026 00:33:58 +0000 (0:00:01.091) 0:07:11.802 ***** 2026-02-05 00:34:22.918604 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:34:22.918615 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:34:22.918626 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 00:34:22.918663 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:34:22.918674 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:34:22.918685 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:34:22.918696 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:34:22.918707 | orchestrator | 2026-02-05 00:34:22.918719 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-05 00:34:22.918730 | orchestrator | Thursday 05 February 2026 00:33:59 +0000 (0:00:00.778) 0:07:12.580 ***** 2026-02-05 00:34:22.918741 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:34:22.918803 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:34:22.918815 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:34:22.918826 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:34:22.918837 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:34:22.918847 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:34:22.918858 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:34:22.918868 | orchestrator | 2026-02-05 00:34:22.918879 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-05 00:34:22.918890 | orchestrator | Thursday 05 February 2026 00:33:59 +0000 (0:00:00.533) 0:07:13.113 ***** 2026-02-05 00:34:22.918901 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:34:22.918911 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:34:22.918922 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:34:22.918933 | orchestrator | ok: [testbed-manager] 2026-02-05 00:34:22.918944 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:34:22.918954 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:34:22.918965 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:34:22.918976 | orchestrator | 2026-02-05 00:34:22.918987 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-02-05 00:34:22.918997 | orchestrator | Thursday 05 February 2026 00:34:00 +0000 (0:00:00.506) 0:07:13.620 *****
2026-02-05 00:34:22.919008 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:22.919019 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:22.919029 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:22.919040 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:22.919050 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:22.919061 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:22.919072 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:22.919082 | orchestrator |
2026-02-05 00:34:22.919093 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-05 00:34:22.919104 | orchestrator | Thursday 05 February 2026 00:34:00 +0000 (0:00:00.668) 0:07:14.289 *****
2026-02-05 00:34:22.919115 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:22.919125 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:22.919136 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:22.919147 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:22.919158 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:22.919168 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:22.919179 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:22.919190 | orchestrator |
2026-02-05 00:34:22.919201 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-05 00:34:22.919211 | orchestrator | Thursday 05 February 2026 00:34:01 +0000 (0:00:00.505) 0:07:14.795 *****
2026-02-05 00:34:22.919222 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:22.919233 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:22.919243 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:22.919254 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:22.919265 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:22.919275 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:22.919303 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:22.919315 | orchestrator |
2026-02-05 00:34:22.919347 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-05 00:34:22.919358 | orchestrator | Thursday 05 February 2026 00:34:06 +0000 (0:00:04.841) 0:07:19.637 *****
2026-02-05 00:34:22.919369 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:34:22.919380 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:34:22.919401 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:34:22.919413 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:34:22.919423 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:34:22.919434 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:34:22.919445 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:34:22.919455 | orchestrator |
2026-02-05 00:34:22.919466 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-05 00:34:22.919477 | orchestrator | Thursday 05 February 2026 00:34:06 +0000 (0:00:00.429) 0:07:20.067 *****
2026-02-05 00:34:22.919489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:34:22.919502 | orchestrator |
2026-02-05 00:34:22.919513 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-05 00:34:22.919524 | orchestrator | Thursday 05 February 2026 00:34:07 +0000 (0:00:00.830) 0:07:20.898 *****
2026-02-05 00:34:22.919535 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:22.919546 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:22.919557 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:22.919567 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:22.919578 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:22.919588 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:22.919599 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:22.919610 | orchestrator |
2026-02-05 00:34:22.919621 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-05 00:34:22.919632 | orchestrator | Thursday 05 February 2026 00:34:09 +0000 (0:00:01.888) 0:07:22.786 *****
2026-02-05 00:34:22.919643 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:22.919653 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:22.919664 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:22.919674 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:22.919685 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:22.919695 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:22.919706 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:22.919717 | orchestrator |
2026-02-05 00:34:22.919728 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-05 00:34:22.919739 | orchestrator | Thursday 05 February 2026 00:34:10 +0000 (0:00:01.114) 0:07:23.900 *****
2026-02-05 00:34:22.919772 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:22.919790 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:22.919801 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:22.919812 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:22.919887 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:22.919900 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:22.919911 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:22.919921 | orchestrator |
2026-02-05 00:34:22.919932 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-05 00:34:22.919950 | orchestrator | Thursday 05 February 2026 00:34:11 +0000 (0:00:00.873) 0:07:24.774 *****
2026-02-05 00:34:22.919961 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.919974 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.919985 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.919996 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.920007 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.920018 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.920037 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-05 00:34:22.920048 | orchestrator |
2026-02-05 00:34:22.920059 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-05 00:34:22.920071 | orchestrator | Thursday 05 February 2026 00:34:13 +0000 (0:00:01.883) 0:07:26.657 *****
2026-02-05 00:34:22.920082 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:34:22.920093 | orchestrator |
2026-02-05 00:34:22.920104 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-05 00:34:22.920115 | orchestrator | Thursday 05 February 2026 00:34:13 +0000 (0:00:00.759) 0:07:27.417 *****
2026-02-05 00:34:22.920126 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:22.920137 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:22.920147 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:22.920158 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:22.920169 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:22.920180 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:22.920191 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:22.920201 | orchestrator |
2026-02-05 00:34:22.920221 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-05 00:34:52.100274 | orchestrator | Thursday 05 February 2026 00:34:22 +0000 (0:00:09.032) 0:07:36.449 *****
2026-02-05 00:34:52.100389 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:52.100407 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:52.100419 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:52.100430 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:52.100441 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:52.100452 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:52.100463 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:52.100474 | orchestrator |
2026-02-05 00:34:52.100487 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-05 00:34:52.100499 | orchestrator | Thursday 05 February 2026 00:34:24 +0000 (0:00:01.893) 0:07:38.343 *****
2026-02-05 00:34:52.100510 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:52.100520 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:52.100531 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:52.100542 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:52.100553 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:52.100564 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:52.100575 | orchestrator |
2026-02-05 00:34:52.100586 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-05 00:34:52.100597 | orchestrator | Thursday 05 February 2026 00:34:26 +0000 (0:00:01.281) 0:07:39.624 *****
2026-02-05 00:34:52.100608 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.100619 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.100665 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.100678 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.100689 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.100700 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.100711 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.100722 | orchestrator |
2026-02-05 00:34:52.100734 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-05 00:34:52.100747 | orchestrator |
2026-02-05 00:34:52.100793 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-05 00:34:52.100809 | orchestrator | Thursday 05 February 2026 00:34:27 +0000 (0:00:01.470) 0:07:41.095 *****
2026-02-05 00:34:52.100822 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:34:52.100834 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:34:52.100874 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:34:52.100887 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:34:52.100900 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:34:52.100912 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:34:52.100925 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:34:52.100937 | orchestrator |
2026-02-05 00:34:52.100950 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-05 00:34:52.100963 | orchestrator |
2026-02-05 00:34:52.100975 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-05 00:34:52.100987 | orchestrator | Thursday 05 February 2026 00:34:28 +0000 (0:00:00.482) 0:07:41.577 *****
2026-02-05 00:34:52.101000 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.101013 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.101025 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.101037 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.101050 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.101062 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.101091 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.101102 | orchestrator |
2026-02-05 00:34:52.101113 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-05 00:34:52.101124 | orchestrator | Thursday 05 February 2026 00:34:29 +0000 (0:00:01.356) 0:07:42.934 *****
2026-02-05 00:34:52.101135 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:52.101146 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:52.101157 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:52.101167 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:52.101178 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:52.101189 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:52.101200 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:52.101210 | orchestrator |
2026-02-05 00:34:52.101221 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-05 00:34:52.101232 | orchestrator | Thursday 05 February 2026 00:34:30 +0000 (0:00:01.426) 0:07:44.361 *****
2026-02-05 00:34:52.101243 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:34:52.101253 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:34:52.101264 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:34:52.101275 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:34:52.101286 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:34:52.101296 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:34:52.101307 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:34:52.101318 | orchestrator |
2026-02-05 00:34:52.101329 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-05 00:34:52.101339 | orchestrator | Thursday 05 February 2026 00:34:31 +0000 (0:00:00.658) 0:07:45.019 *****
2026-02-05 00:34:52.101381 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:34:52.101395 | orchestrator |
2026-02-05 00:34:52.101407 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-05 00:34:52.101417 | orchestrator | Thursday 05 February 2026 00:34:32 +0000 (0:00:00.763) 0:07:45.783 *****
2026-02-05 00:34:52.101430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:34:52.101443 | orchestrator |
2026-02-05 00:34:52.101455 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-05 00:34:52.101466 | orchestrator | Thursday 05 February 2026 00:34:33 +0000 (0:00:00.764) 0:07:46.548 *****
2026-02-05 00:34:52.101477 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.101488 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.101498 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.101509 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.101520 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.101539 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.101550 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.101561 | orchestrator |
2026-02-05 00:34:52.101592 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-05 00:34:52.101603 | orchestrator | Thursday 05 February 2026 00:34:41 +0000 (0:00:08.566) 0:07:55.115 *****
2026-02-05 00:34:52.101614 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.101625 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.101636 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.101647 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.101657 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.101668 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.101679 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.101690 | orchestrator |
2026-02-05 00:34:52.101704 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-05 00:34:52.101723 | orchestrator | Thursday 05 February 2026 00:34:42 +0000 (0:00:00.826) 0:07:55.941 *****
2026-02-05 00:34:52.101735 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.101746 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.101782 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.101794 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.101804 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.101815 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.101825 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.101836 | orchestrator |
2026-02-05 00:34:52.101847 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-05 00:34:52.101858 | orchestrator | Thursday 05 February 2026 00:34:43 +0000 (0:00:01.357) 0:07:57.299 *****
2026-02-05 00:34:52.101869 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.101879 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.101890 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.101901 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.101911 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.101922 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.101933 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.101943 | orchestrator |
2026-02-05 00:34:52.101954 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-05 00:34:52.101965 | orchestrator | Thursday 05 February 2026 00:34:45 +0000 (0:00:01.958) 0:07:59.257 *****
2026-02-05 00:34:52.101976 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.101987 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.101997 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.102008 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.102075 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.102087 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.102098 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.102108 | orchestrator |
2026-02-05 00:34:52.102119 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-05 00:34:52.102130 | orchestrator | Thursday 05 February 2026 00:34:46 +0000 (0:00:01.217) 0:08:00.475 *****
2026-02-05 00:34:52.102142 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.102152 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.102163 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.102174 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.102185 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.102195 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.102212 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.102224 | orchestrator |
2026-02-05 00:34:52.102235 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-05 00:34:52.102246 | orchestrator |
2026-02-05 00:34:52.102256 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-05 00:34:52.102267 | orchestrator | Thursday 05 February 2026 00:34:47 +0000 (0:00:01.060) 0:08:01.535 *****
2026-02-05 00:34:52.102286 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:34:52.102297 | orchestrator |
2026-02-05 00:34:52.102308 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-05 00:34:52.102319 | orchestrator | Thursday 05 February 2026 00:34:48 +0000 (0:00:00.779) 0:08:02.315 *****
2026-02-05 00:34:52.102330 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:52.102341 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:52.102351 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:52.102362 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:52.102373 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:52.102383 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:52.102394 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:52.102405 | orchestrator |
2026-02-05 00:34:52.102416 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-05 00:34:52.102427 | orchestrator | Thursday 05 February 2026 00:34:49 +0000 (0:00:00.744) 0:08:03.060 *****
2026-02-05 00:34:52.102437 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:52.102448 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:52.102459 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:52.102470 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:52.102480 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:52.102491 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:52.102502 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:52.102512 | orchestrator |
2026-02-05 00:34:52.102523 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-05 00:34:52.102534 | orchestrator | Thursday 05 February 2026 00:34:50 +0000 (0:00:01.014) 0:08:04.074 *****
2026-02-05 00:34:52.102545 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:34:52.102555 | orchestrator |
2026-02-05 00:34:52.102567 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-05 00:34:52.102577 | orchestrator | Thursday 05 February 2026 00:34:51 +0000 (0:00:00.807) 0:08:04.882 *****
2026-02-05 00:34:52.102588 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:34:52.102599 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:34:52.102610 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:34:52.102620 | orchestrator | ok: [testbed-manager]
2026-02-05 00:34:52.102631 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:34:52.102642 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:34:52.102653 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:34:52.102663 | orchestrator |
2026-02-05 00:34:52.102682 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-05 00:34:53.530531 | orchestrator | Thursday 05 February 2026 00:34:52 +0000 (0:00:00.747) 0:08:05.629 *****
2026-02-05 00:34:53.530626 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:34:53.530635 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:34:53.530641 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:34:53.530648 | orchestrator | changed: [testbed-manager]
2026-02-05 00:34:53.530657 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:34:53.530665 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:34:53.530673 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:34:53.530681 | orchestrator |
2026-02-05 00:34:53.530690 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:34:53.530700 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-05 00:34:53.530712 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-05 00:34:53.530720 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-05 00:34:53.530749 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-05 00:34:53.530787 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-05 00:34:53.530794 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-05 00:34:53.530799 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-05 00:34:53.530804 | orchestrator |
2026-02-05 00:34:53.530809 | orchestrator |
2026-02-05 00:34:53.530815 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:34:53.530820 | orchestrator | Thursday 05 February 2026 00:34:53 +0000 (0:00:01.082) 0:08:06.711 *****
2026-02-05 00:34:53.530825 | orchestrator | ===============================================================================
2026-02-05 00:34:53.530830 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.32s
2026-02-05 00:34:53.530835 | orchestrator | osism.commons.packages : Download required packages -------------------- 43.53s
2026-02-05 00:34:53.530840 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.37s
2026-02-05 00:34:53.530857 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.97s
2026-02-05 00:34:53.530862 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.32s
2026-02-05 00:34:53.530867 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.12s
2026-02-05 00:34:53.530872 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 10.46s
2026-02-05 00:34:53.530877 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.76s
2026-02-05 00:34:53.530882 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.03s
2026-02-05 00:34:53.530887 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.86s
2026-02-05 00:34:53.530892 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.84s
2026-02-05 00:34:53.530897 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.57s
2026-02-05 00:34:53.530902 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.20s
2026-02-05 00:34:53.530908 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.76s
2026-02-05 00:34:53.530913 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.73s
2026-02-05 00:34:53.530918 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.04s
2026-02-05 00:34:53.530923 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.83s
2026-02-05 00:34:53.530928 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.06s
2026-02-05 00:34:53.530933 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 4.98s
2026-02-05 00:34:53.530938 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 4.84s
2026-02-05 00:34:53.829202 | orchestrator | + osism apply fail2ban
2026-02-05 00:35:05.842326 | orchestrator | 2026-02-05 00:35:05 | INFO  | Prepare task for execution of fail2ban.
2026-02-05 00:35:05.908010 | orchestrator | 2026-02-05 00:35:05 | INFO  | Task fabe607a-a1e0-4ae4-a4c8-2a7786e1b676 (fail2ban) was prepared for execution.
2026-02-05 00:35:05.908128 | orchestrator | 2026-02-05 00:35:05 | INFO  | It takes a moment until task fabe607a-a1e0-4ae4-a4c8-2a7786e1b676 (fail2ban) has been started and output is visible here.
2026-02-05 00:35:26.448072 | orchestrator |
2026-02-05 00:35:26.448223 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-02-05 00:35:26.448276 | orchestrator |
2026-02-05 00:35:26.448289 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-02-05 00:35:26.448300 | orchestrator | Thursday 05 February 2026 00:35:09 +0000 (0:00:00.227) 0:00:00.227 *****
2026-02-05 00:35:26.448314 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:35:26.448327 | orchestrator |
2026-02-05 00:35:26.448339 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-02-05 00:35:26.448350 | orchestrator | Thursday 05 February 2026 00:35:10 +0000 (0:00:00.982) 0:00:01.210 *****
2026-02-05 00:35:26.448361 | orchestrator | changed: [testbed-manager]
2026-02-05 00:35:26.448372 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:35:26.448383 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:35:26.448394 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:35:26.448405 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:35:26.448415 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:35:26.448426 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:35:26.448437 | orchestrator |
2026-02-05 00:35:26.448447 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-02-05 00:35:26.448458 | orchestrator | Thursday 05 February 2026 00:35:21 +0000 (0:00:10.866) 0:00:12.076 *****
2026-02-05 00:35:26.448469 | orchestrator | changed: [testbed-manager]
2026-02-05 00:35:26.448480 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:35:26.448491 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:35:26.448501 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:35:26.448512 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:35:26.448523 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:35:26.448533 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:35:26.448544 | orchestrator |
2026-02-05 00:35:26.448555 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-05 00:35:26.448566 | orchestrator | Thursday 05 February 2026 00:35:23 +0000 (0:00:01.449) 0:00:13.532 *****
2026-02-05 00:35:26.448577 | orchestrator | ok: [testbed-manager]
2026-02-05 00:35:26.448588 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:35:26.448599 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:35:26.448613 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:35:26.448625 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:35:26.448637 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:35:26.448649 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:35:26.448662 | orchestrator |
2026-02-05 00:35:26.448674 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-05 00:35:26.448687 | orchestrator | Thursday 05 February 2026 00:35:24 +0000 (0:00:01.449) 0:00:14.981 *****
2026-02-05 00:35:26.448701 | orchestrator | changed: [testbed-manager]
2026-02-05 00:35:26.448713 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:35:26.448724 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:35:26.448735 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:35:26.448774 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:35:26.448794 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:35:26.448811 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:35:26.448823 | orchestrator |
2026-02-05 00:35:26.448833 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:35:26.448860 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448873 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448884 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448895 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448915 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448926 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448937 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:35:26.448948 | orchestrator |
2026-02-05 00:35:26.448959 | orchestrator |
2026-02-05 00:35:26.448969 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:35:26.448980 | orchestrator | Thursday 05 February 2026 00:35:26 +0000 (0:00:01.587) 0:00:16.569 *****
2026-02-05 00:35:26.448991 | orchestrator | ===============================================================================
2026-02-05 00:35:26.449002 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.87s
2026-02-05 00:35:26.449013 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.59s
2026-02-05 00:35:26.449023 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.46s
2026-02-05 00:35:26.449038 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.45s
2026-02-05 00:35:26.449056 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.98s
2026-02-05 00:35:26.734479 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-05 00:35:26.735390 | orchestrator | + osism apply network
2026-02-05 00:35:38.533290 | orchestrator | 2026-02-05 00:35:38 | INFO  | Prepare task for execution of network.
2026-02-05 00:35:38.608805 | orchestrator | 2026-02-05 00:35:38 | INFO  | Task 933636d2-76b6-4d18-92c2-3cf9120f92f4 (network) was prepared for execution.
2026-02-05 00:35:38.608904 | orchestrator | 2026-02-05 00:35:38 | INFO  | It takes a moment until task 933636d2-76b6-4d18-92c2-3cf9120f92f4 (network) has been started and output is visible here.
2026-02-05 00:36:06.085297 | orchestrator |
2026-02-05 00:36:06.085417 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-05 00:36:06.085441 | orchestrator |
2026-02-05 00:36:06.085461 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-05 00:36:06.085481 | orchestrator | Thursday 05 February 2026 00:35:42 +0000 (0:00:00.253) 0:00:00.253 *****
2026-02-05 00:36:06.085501 | orchestrator | ok: [testbed-manager]
2026-02-05 00:36:06.085522 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:36:06.085540 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:36:06.085558 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:36:06.085575 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:36:06.085593 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:36:06.085611 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:36:06.085630 | orchestrator |
2026-02-05 00:36:06.085651 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-05 00:36:06.085671 | orchestrator | Thursday 05 February 2026 00:35:43 +0000 (0:00:00.680) 0:00:00.933 *****
2026-02-05 00:36:06.085695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:36:06.085717 | orchestrator |
2026-02-05 00:36:06.085778 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-05 00:36:06.085799 | orchestrator | Thursday 05 February 2026 00:35:44 +0000 (0:00:01.175) 0:00:02.108 *****
2026-02-05 00:36:06.085820 | orchestrator | ok: [testbed-manager]
2026-02-05 00:36:06.085843 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:36:06.085868 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:36:06.085886 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:36:06.085907 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:36:06.085955 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:36:06.085976 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:36:06.085995 | orchestrator |
2026-02-05 00:36:06.086087 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-05 00:36:06.086112 | orchestrator | Thursday 05 February 2026 00:35:46 +0000 (0:00:02.053) 0:00:04.161 *****
2026-02-05 00:36:06.086134 | orchestrator | ok: [testbed-manager]
2026-02-05 00:36:06.086157 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:36:06.086178 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:36:06.086198 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:36:06.086218 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:36:06.086237 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:36:06.086256 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:36:06.086276 | orchestrator |
2026-02-05 00:36:06.086296 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-05 00:36:06.086314 | orchestrator | Thursday 05 February 2026 00:35:48 +0000 (0:00:01.890) 0:00:06.052 *****
2026-02-05 00:36:06.086332 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-05 00:36:06.086351 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-05 00:36:06.086368 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-05 00:36:06.086385 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-05 00:36:06.086402 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-05 00:36:06.086421 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-05 00:36:06.086443 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-05 00:36:06.086463 | orchestrator |
2026-02-05 00:36:06.086482 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-05 00:36:06.086494 | orchestrator | Thursday 05 February 2026 00:35:49 +0000 (0:00:00.954) 0:00:07.007 *****
2026-02-05 00:36:06.086505 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 00:36:06.086517 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-05 00:36:06.086528 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-05 00:36:06.086539 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:36:06.086550 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 00:36:06.086561 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:36:06.086572 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 00:36:06.086583 | orchestrator |
2026-02-05 00:36:06.086594 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-05 00:36:06.086605 | orchestrator | Thursday 05 February 2026 00:35:53 +0000 (0:00:03.527) 0:00:10.534 *****
2026-02-05 00:36:06.086615 | orchestrator | changed: [testbed-manager]
2026-02-05 00:36:06.086626 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:36:06.086637 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:36:06.086647 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:36:06.086658 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:36:06.086668 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:36:06.086679 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:36:06.086690 | orchestrator |
2026-02-05 00:36:06.086718 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-05 00:36:06.086762 | orchestrator | Thursday 05 February 2026 00:35:54 +0000 (0:00:01.521) 0:00:12.056 *****
2026-02-05 00:36:06.086781 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:36:06.086792 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:36:06.086803 | orchestrator | ok: [testbed-node-1
-> localhost] 2026-02-05 00:36:06.086813 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 00:36:06.086824 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 00:36:06.086835 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 00:36:06.086845 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 00:36:06.086856 | orchestrator | 2026-02-05 00:36:06.086867 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-05 00:36:06.086877 | orchestrator | Thursday 05 February 2026 00:35:56 +0000 (0:00:01.711) 0:00:13.767 ***** 2026-02-05 00:36:06.086903 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:06.086914 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:06.086925 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:06.086936 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:06.086946 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:06.086957 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:06.086968 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:06.086978 | orchestrator | 2026-02-05 00:36:06.086990 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-05 00:36:06.087022 | orchestrator | Thursday 05 February 2026 00:35:57 +0000 (0:00:01.041) 0:00:14.809 ***** 2026-02-05 00:36:06.087034 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:36:06.087045 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:06.087056 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:06.087067 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:06.087078 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:06.087089 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:06.087099 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:06.087110 | orchestrator | 2026-02-05 00:36:06.087121 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-02-05 00:36:06.087132 | orchestrator | Thursday 05 February 2026 00:35:57 +0000 (0:00:00.566) 0:00:15.375 ***** 2026-02-05 00:36:06.087143 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:06.087154 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:06.087165 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:06.087176 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:06.087186 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:06.087197 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:06.087208 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:06.087218 | orchestrator | 2026-02-05 00:36:06.087229 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-05 00:36:06.087240 | orchestrator | Thursday 05 February 2026 00:36:00 +0000 (0:00:02.079) 0:00:17.455 ***** 2026-02-05 00:36:06.087251 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:06.087261 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:06.087272 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:06.087283 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:06.087294 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:06.087304 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:06.087317 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-05 00:36:06.087329 | orchestrator | 2026-02-05 00:36:06.087340 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-05 00:36:06.087351 | orchestrator | Thursday 05 February 2026 00:36:00 +0000 (0:00:00.801) 0:00:18.257 ***** 2026-02-05 00:36:06.087362 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:06.087373 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:36:06.087383 | orchestrator | changed: [testbed-node-2] 2026-02-05 
00:36:06.087394 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:36:06.087405 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:36:06.087415 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:36:06.087426 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:36:06.087437 | orchestrator | 2026-02-05 00:36:06.087448 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-05 00:36:06.087458 | orchestrator | Thursday 05 February 2026 00:36:02 +0000 (0:00:01.458) 0:00:19.715 ***** 2026-02-05 00:36:06.087476 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:36:06.087490 | orchestrator | 2026-02-05 00:36:06.087501 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-05 00:36:06.087512 | orchestrator | Thursday 05 February 2026 00:36:03 +0000 (0:00:01.066) 0:00:20.781 ***** 2026-02-05 00:36:06.087530 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:06.087541 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:06.087551 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:06.087562 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:06.087573 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:06.087583 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:06.087594 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:06.087605 | orchestrator | 2026-02-05 00:36:06.087616 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-05 00:36:06.087627 | orchestrator | Thursday 05 February 2026 00:36:04 +0000 (0:00:00.907) 0:00:21.689 ***** 2026-02-05 00:36:06.087637 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:06.087648 | orchestrator | ok: [testbed-node-0] 2026-02-05 
00:36:06.087658 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:06.087669 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:06.087679 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:06.087690 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:06.087700 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:06.087711 | orchestrator | 2026-02-05 00:36:06.087721 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-05 00:36:06.087753 | orchestrator | Thursday 05 February 2026 00:36:04 +0000 (0:00:00.689) 0:00:22.379 ***** 2026-02-05 00:36:06.087765 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087776 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087787 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087798 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087808 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087819 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087830 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087840 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087851 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087862 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-05 00:36:06.087873 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087883 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087894 | orchestrator | changed: [testbed-node-5] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087905 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-05 00:36:06.087916 | orchestrator | 2026-02-05 00:36:06.087933 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-05 00:36:20.703484 | orchestrator | Thursday 05 February 2026 00:36:06 +0000 (0:00:01.147) 0:00:23.526 ***** 2026-02-05 00:36:20.703597 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:36:20.703615 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:20.703628 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:20.703639 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:20.703650 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:20.703662 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:20.703689 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:20.703702 | orchestrator | 2026-02-05 00:36:20.703803 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-05 00:36:20.703816 | orchestrator | Thursday 05 February 2026 00:36:06 +0000 (0:00:00.543) 0:00:24.070 ***** 2026-02-05 00:36:20.703829 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2, testbed-node-5, testbed-node-3 2026-02-05 00:36:20.703868 | orchestrator | 2026-02-05 00:36:20.703880 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-05 00:36:20.703892 | orchestrator | Thursday 05 February 2026 00:36:10 +0000 (0:00:04.106) 0:00:28.176 ***** 2026-02-05 00:36:20.703905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.703917 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.703929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.703956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.703968 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.703986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.703997 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704035 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704184 | orchestrator | 2026-02-05 00:36:20.704204 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-05 00:36:20.704222 | orchestrator | Thursday 05 February 2026 00:36:15 +0000 (0:00:05.085) 0:00:33.261 ***** 2026-02-05 00:36:20.704240 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704258 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704326 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704383 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-05 00:36:20.704402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:20.704503 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:33.013402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-05 00:36:33.013559 | orchestrator | 2026-02-05 00:36:33.013578 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-05 00:36:33.013592 | orchestrator | Thursday 05 February 2026 00:36:20 +0000 (0:00:05.177) 0:00:38.439 ***** 2026-02-05 00:36:33.013605 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:36:33.013617 | orchestrator | 2026-02-05 00:36:33.013629 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-05 00:36:33.013640 | orchestrator | Thursday 05 February 2026 00:36:22 +0000 (0:00:01.107) 0:00:39.546 ***** 2026-02-05 00:36:33.013651 | orchestrator | ok: [testbed-manager] 2026-02-05 00:36:33.013664 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:36:33.013675 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:36:33.013686 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:36:33.013696 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:36:33.013759 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:36:33.013771 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:36:33.013782 | orchestrator | 2026-02-05 00:36:33.013793 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2026-02-05 00:36:33.013804 | orchestrator | Thursday 05 February 2026 00:36:23 +0000 (0:00:01.010) 0:00:40.557 ***** 2026-02-05 00:36:33.013815 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.013828 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:36:33.013839 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.013852 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.013864 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.013878 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:36:33.013908 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.013922 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.013935 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:36:33.013949 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.013962 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:36:33.013975 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.013987 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:33.014001 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.014014 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.014089 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 
00:36:33.014103 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.014116 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.014153 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:33.014166 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.014179 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:36:33.014193 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.014206 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.014217 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:33.014229 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.014240 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:36:33.014251 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.014261 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.014272 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:33.014283 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:33.014294 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-05 00:36:33.014305 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-05 00:36:33.014316 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-05 00:36:33.014327 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-05 00:36:33.014338 | 
orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:33.014349 | orchestrator | 2026-02-05 00:36:33.014360 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-02-05 00:36:33.014407 | orchestrator | Thursday 05 February 2026 00:36:23 +0000 (0:00:00.849) 0:00:41.407 ***** 2026-02-05 00:36:33.014421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:36:33.014433 | orchestrator | 2026-02-05 00:36:33.014444 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-02-05 00:36:33.014455 | orchestrator | Thursday 05 February 2026 00:36:25 +0000 (0:00:01.235) 0:00:42.642 ***** 2026-02-05 00:36:33.014466 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:36:33.014477 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:33.014488 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:33.014499 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:33.014510 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:36:33.014520 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:36:33.014531 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:36:33.014542 | orchestrator | 2026-02-05 00:36:33.014553 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-02-05 00:36:33.014564 | orchestrator | Thursday 05 February 2026 00:36:25 +0000 (0:00:00.607) 0:00:43.250 ***** 2026-02-05 00:36:33.014575 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:36:33.014586 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:36:33.014597 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:36:33.014608 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:36:33.014619 | 
orchestrator | skipping: [testbed-node-3]
2026-02-05 00:36:33.014630 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:36:33.014641 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:36:33.014651 | orchestrator |
2026-02-05 00:36:33.014662 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] *****
2026-02-05 00:36:33.014673 | orchestrator | Thursday 05 February 2026 00:36:26 +0000 (0:00:00.756) 0:00:44.006 *****
2026-02-05 00:36:33.014684 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:36:33.014695 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:36:33.014779 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:36:33.014792 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:36:33.014802 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:36:33.014813 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:36:33.014824 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:36:33.014835 | orchestrator |
2026-02-05 00:36:33.014846 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] *****
2026-02-05 00:36:33.014858 | orchestrator | Thursday 05 February 2026 00:36:27 +0000 (0:00:00.580) 0:00:44.586 *****
2026-02-05 00:36:33.014869 | orchestrator | ok: [testbed-manager]
2026-02-05 00:36:33.014880 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:36:33.014891 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:36:33.014909 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:36:33.014920 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:36:33.014931 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:36:33.014942 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:36:33.014952 | orchestrator |
2026-02-05 00:36:33.014963 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] *******
2026-02-05 00:36:33.014974 | orchestrator | Thursday 05 February 2026 00:36:28 +0000 (0:00:01.607) 0:00:46.194 *****
2026-02-05 00:36:33.014985 | orchestrator | ok: [testbed-manager]
2026-02-05 00:36:33.014996 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:36:33.015007 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:36:33.015017 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:36:33.015028 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:36:33.015039 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:36:33.015049 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:36:33.015060 | orchestrator |
2026-02-05 00:36:33.015071 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] ****************
2026-02-05 00:36:33.015082 | orchestrator | Thursday 05 February 2026 00:36:29 +0000 (0:00:00.881) 0:00:47.076 *****
2026-02-05 00:36:33.015094 | orchestrator | ok: [testbed-manager]
2026-02-05 00:36:33.015105 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:36:33.015115 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:36:33.015126 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:36:33.015137 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:36:33.015147 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:36:33.015158 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:36:33.015169 | orchestrator |
2026-02-05 00:36:33.015179 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-05 00:36:33.015191 | orchestrator | Thursday 05 February 2026 00:36:31 +0000 (0:00:02.109) 0:00:49.185 *****
2026-02-05 00:36:33.015202 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:36:33.015213 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:36:33.015224 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:36:33.015235 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:36:33.015246 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:36:33.015257 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:36:33.015268 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:36:33.015279 | orchestrator |
2026-02-05 00:36:33.015290 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-05 00:36:33.015302 | orchestrator | Thursday 05 February 2026 00:36:32 +0000 (0:00:00.762) 0:00:49.947 *****
2026-02-05 00:36:33.015313 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:36:33.015324 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:36:33.015335 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:36:33.015345 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:36:33.015356 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:36:33.015367 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:36:33.015378 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:36:33.015389 | orchestrator |
2026-02-05 00:36:33.015400 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:36:33.015412 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-05 00:36:33.015432 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 00:36:33.015452 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 00:36:33.350275 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 00:36:33.350356 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 00:36:33.350366 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 00:36:33.350374 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-05 00:36:33.350382 | orchestrator |
2026-02-05 00:36:33.350390 | orchestrator |
2026-02-05 00:36:33.350398 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:36:33.350407 | orchestrator | Thursday 05 February 2026 00:36:33 +0000 (0:00:00.506) 0:00:50.454 *****
2026-02-05 00:36:33.350414 | orchestrator | ===============================================================================
2026-02-05 00:36:33.350422 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.18s
2026-02-05 00:36:33.350429 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.09s
2026-02-05 00:36:33.350436 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.11s
2026-02-05 00:36:33.350443 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.53s
2026-02-05 00:36:33.350451 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.11s
2026-02-05 00:36:33.350458 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.08s
2026-02-05 00:36:33.350465 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.05s
2026-02-05 00:36:33.350472 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.89s
2026-02-05 00:36:33.350479 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.71s
2026-02-05 00:36:33.350486 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.61s
2026-02-05 00:36:33.350494 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.52s
2026-02-05 00:36:33.350501 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.46s
2026-02-05 00:36:33.350509 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.24s
2026-02-05 00:36:33.350516 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.18s
2026-02-05 00:36:33.350523 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s
2026-02-05 00:36:33.350531 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s
2026-02-05 00:36:33.350538 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.07s
2026-02-05 00:36:33.350545 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.04s
2026-02-05 00:36:33.350552 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.01s
2026-02-05 00:36:33.350559 | orchestrator | osism.commons.network : Create required directories --------------------- 0.95s
2026-02-05 00:36:33.617899 | orchestrator | + osism apply wireguard
2026-02-05 00:36:45.664954 | orchestrator | 2026-02-05 00:36:45 | INFO  | Prepare task for execution of wireguard.
2026-02-05 00:36:45.727955 | orchestrator | 2026-02-05 00:36:45 | INFO  | Task f9a9d4d8-bb1a-4151-9a4a-1c4dad6a8bf6 (wireguard) was prepared for execution.
2026-02-05 00:36:45.728123 | orchestrator | 2026-02-05 00:36:45 | INFO  | It takes a moment until task f9a9d4d8-bb1a-4151-9a4a-1c4dad6a8bf6 (wireguard) has been started and output is visible here.
2026-02-05 00:37:02.732021 | orchestrator |
2026-02-05 00:37:02.732129 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-05 00:37:02.732141 | orchestrator |
2026-02-05 00:37:02.732149 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-05 00:37:02.732157 | orchestrator | Thursday 05 February 2026 00:36:49 +0000 (0:00:00.167) 0:00:00.167 *****
2026-02-05 00:37:02.732164 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:02.732173 | orchestrator |
2026-02-05 00:37:02.732179 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-05 00:37:02.732186 | orchestrator | Thursday 05 February 2026 00:36:50 +0000 (0:00:01.134) 0:00:01.302 *****
2026-02-05 00:37:02.732193 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732201 | orchestrator |
2026-02-05 00:37:02.732208 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-05 00:37:02.732215 | orchestrator | Thursday 05 February 2026 00:36:56 +0000 (0:00:05.618) 0:00:06.920 *****
2026-02-05 00:37:02.732222 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732229 | orchestrator |
2026-02-05 00:37:02.732235 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-05 00:37:02.732242 | orchestrator | Thursday 05 February 2026 00:36:56 +0000 (0:00:00.550) 0:00:07.471 *****
2026-02-05 00:37:02.732249 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732256 | orchestrator |
2026-02-05 00:37:02.732263 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-05 00:37:02.732270 | orchestrator | Thursday 05 February 2026 00:36:56 +0000 (0:00:00.404) 0:00:07.876 *****
2026-02-05 00:37:02.732276 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:02.732283 | orchestrator |
2026-02-05 00:37:02.732290 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-05 00:37:02.732297 | orchestrator | Thursday 05 February 2026 00:36:57 +0000 (0:00:00.532) 0:00:08.408 *****
2026-02-05 00:37:02.732304 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:02.732310 | orchestrator |
2026-02-05 00:37:02.732317 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-05 00:37:02.732324 | orchestrator | Thursday 05 February 2026 00:36:57 +0000 (0:00:00.373) 0:00:08.782 *****
2026-02-05 00:37:02.732331 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:02.732338 | orchestrator |
2026-02-05 00:37:02.732345 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-05 00:37:02.732351 | orchestrator | Thursday 05 February 2026 00:36:58 +0000 (0:00:00.385) 0:00:09.167 *****
2026-02-05 00:37:02.732357 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732365 | orchestrator |
2026-02-05 00:37:02.732372 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-05 00:37:02.732379 | orchestrator | Thursday 05 February 2026 00:36:59 +0000 (0:00:01.026) 0:00:10.194 *****
2026-02-05 00:37:02.732385 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-05 00:37:02.732392 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732398 | orchestrator |
2026-02-05 00:37:02.732404 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-05 00:37:02.732410 | orchestrator | Thursday 05 February 2026 00:37:00 +0000 (0:00:00.824) 0:00:11.018 *****
2026-02-05 00:37:02.732435 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732442 | orchestrator |
2026-02-05 00:37:02.732449 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-05 00:37:02.732456 | orchestrator | Thursday 05 February 2026 00:37:01 +0000 (0:00:01.443) 0:00:12.461 *****
2026-02-05 00:37:02.732463 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:02.732470 | orchestrator |
2026-02-05 00:37:02.732477 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:37:02.732504 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:37:02.732513 | orchestrator |
2026-02-05 00:37:02.732520 | orchestrator |
2026-02-05 00:37:02.732527 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:37:02.732534 | orchestrator | Thursday 05 February 2026 00:37:02 +0000 (0:00:00.890) 0:00:13.352 *****
2026-02-05 00:37:02.732540 | orchestrator | ===============================================================================
2026-02-05 00:37:02.732547 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.62s
2026-02-05 00:37:02.732558 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.44s
2026-02-05 00:37:02.732566 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.13s
2026-02-05 00:37:02.732573 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.03s
2026-02-05 00:37:02.732581 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s
2026-02-05 00:37:02.732588 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.82s
2026-02-05 00:37:02.732596 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2026-02-05 00:37:02.732603 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2026-02-05 00:37:02.732610 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s
2026-02-05 00:37:02.732617 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s
2026-02-05 00:37:02.732624 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.37s
2026-02-05 00:37:03.015815 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-05 00:37:03.041026 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-05 00:37:03.041117 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-05 00:37:03.122771 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 170 0 --:--:-- --:--:-- --:--:-- 172
2026-02-05 00:37:03.136830 | orchestrator | + osism apply --environment custom workarounds
2026-02-05 00:37:05.070547 | orchestrator | 2026-02-05 00:37:05 | INFO  | Trying to run play workarounds in environment custom
2026-02-05 00:37:15.117611 | orchestrator | 2026-02-05 00:37:15 | INFO  | Prepare task for execution of workarounds.
2026-02-05 00:37:15.185915 | orchestrator | 2026-02-05 00:37:15 | INFO  | Task 820ce0a2-66cc-4457-9762-abe03266f208 (workarounds) was prepared for execution.
2026-02-05 00:37:15.186012 | orchestrator | 2026-02-05 00:37:15 | INFO  | It takes a moment until task 820ce0a2-66cc-4457-9762-abe03266f208 (workarounds) has been started and output is visible here.
2026-02-05 00:37:38.359671 | orchestrator |
2026-02-05 00:37:38.359792 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:37:38.359811 | orchestrator |
2026-02-05 00:37:38.359823 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-05 00:37:38.359835 | orchestrator | Thursday 05 February 2026 00:37:18 +0000 (0:00:00.114) 0:00:00.114 *****
2026-02-05 00:37:38.359847 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359858 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359869 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359880 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359891 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359902 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359913 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-05 00:37:38.359944 | orchestrator |
2026-02-05 00:37:38.359956 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-05 00:37:38.359967 | orchestrator |
2026-02-05 00:37:38.359978 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-05 00:37:38.359988 | orchestrator | Thursday 05 February 2026 00:37:19 +0000 (0:00:00.667) 0:00:00.781 *****
2026-02-05 00:37:38.359999 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:38.360011 | orchestrator |
2026-02-05 00:37:38.360022 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-05 00:37:38.360033 | orchestrator |
2026-02-05 00:37:38.360044 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-05 00:37:38.360054 | orchestrator | Thursday 05 February 2026 00:37:21 +0000 (0:00:02.091) 0:00:02.873 *****
2026-02-05 00:37:38.360065 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:37:38.360076 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:37:38.360087 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:37:38.360098 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:37:38.360108 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:37:38.360125 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:37:38.360144 | orchestrator |
2026-02-05 00:37:38.360165 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-05 00:37:38.360183 | orchestrator |
2026-02-05 00:37:38.360202 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-05 00:37:38.360223 | orchestrator | Thursday 05 February 2026 00:37:23 +0000 (0:00:01.725) 0:00:04.599 *****
2026-02-05 00:37:38.360243 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 00:37:38.360266 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 00:37:38.360289 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 00:37:38.360310 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 00:37:38.360327 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 00:37:38.360349 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-05 00:37:38.360363 | orchestrator |
2026-02-05 00:37:38.360375 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-05 00:37:38.360389 | orchestrator | Thursday 05 February 2026 00:37:24 +0000 (0:00:01.436) 0:00:06.035 *****
2026-02-05 00:37:38.360402 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:37:38.360415 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:37:38.360428 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:37:38.360440 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:37:38.360452 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:37:38.360464 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:37:38.360476 | orchestrator |
2026-02-05 00:37:38.360489 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-05 00:37:38.360502 | orchestrator | Thursday 05 February 2026 00:37:28 +0000 (0:00:03.638) 0:00:09.674 *****
2026-02-05 00:37:38.360515 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:37:38.360527 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:37:38.360538 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:37:38.360549 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:37:38.360560 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:37:38.360570 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:37:38.360581 | orchestrator |
2026-02-05 00:37:38.360592 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-05 00:37:38.360603 | orchestrator |
2026-02-05 00:37:38.360614 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-05 00:37:38.360624 | orchestrator | Thursday 05 February 2026 00:37:29 +0000 (0:00:00.649) 0:00:10.324 *****
2026-02-05 00:37:38.360736 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:37:38.360750 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:37:38.360761 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:37:38.360772 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:37:38.360783 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:37:38.360793 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:37:38.360804 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:38.360815 | orchestrator |
2026-02-05 00:37:38.360826 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-05 00:37:38.360837 | orchestrator | Thursday 05 February 2026 00:37:30 +0000 (0:00:01.481) 0:00:11.806 *****
2026-02-05 00:37:38.360848 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:37:38.360858 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:37:38.360869 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:37:38.360880 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:37:38.360890 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:37:38.360901 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:37:38.360956 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:38.360968 | orchestrator |
2026-02-05 00:37:38.360980 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-05 00:37:38.360991 | orchestrator | Thursday 05 February 2026 00:37:32 +0000 (0:00:01.510) 0:00:13.317 *****
2026-02-05 00:37:38.361002 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:37:38.361013 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:37:38.361023 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:37:38.361066 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:37:38.361080 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:37:38.361091 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:37:38.361102 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:38.361113 | orchestrator |
2026-02-05 00:37:38.361124 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-05 00:37:38.361135 | orchestrator | Thursday 05 February 2026 00:37:33 +0000 (0:00:01.473) 0:00:14.790 *****
2026-02-05 00:37:38.361146 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:37:38.361157 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:37:38.361168 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:37:38.361179 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:37:38.361190 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:37:38.361200 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:37:38.361211 | orchestrator | changed: [testbed-manager]
2026-02-05 00:37:38.361222 | orchestrator |
2026-02-05 00:37:38.361233 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-05 00:37:38.361244 | orchestrator | Thursday 05 February 2026 00:37:35 +0000 (0:00:01.656) 0:00:16.446 *****
2026-02-05 00:37:38.361255 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:37:38.361265 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:37:38.361276 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:37:38.361287 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:37:38.361298 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:37:38.361308 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:37:38.361319 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:37:38.361329 | orchestrator |
2026-02-05 00:37:38.361340 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-05 00:37:38.361351 | orchestrator |
2026-02-05 00:37:38.361362 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-05 00:37:38.361373 | orchestrator | Thursday 05 February 2026 00:37:35 +0000 (0:00:00.553) 0:00:16.999 *****
2026-02-05 00:37:38.361384 | orchestrator | ok: [testbed-manager]
2026-02-05 00:37:38.361395 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:37:38.361406 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:37:38.361416 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:37:38.361427 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:37:38.361438 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:37:38.361456 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:37:38.361467 | orchestrator |
2026-02-05 00:37:38.361478 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:37:38.361491 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 00:37:38.361503 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:37:38.361514 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:37:38.361531 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:37:38.361543 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:37:38.361554 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:37:38.361565 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:37:38.361576 | orchestrator |
2026-02-05 00:37:38.361587 | orchestrator |
2026-02-05 00:37:38.361598 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:37:38.361609 | orchestrator | Thursday 05 February 2026 00:37:38 +0000 (0:00:02.564) 0:00:19.564 *****
2026-02-05 00:37:38.361620 | orchestrator | ===============================================================================
2026-02-05 00:37:38.361687 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.64s
2026-02-05 00:37:38.361701 | orchestrator | Install python3-docker -------------------------------------------------- 2.56s
2026-02-05 00:37:38.361712 | orchestrator | Apply netplan configuration --------------------------------------------- 2.09s
2026-02-05 00:37:38.361723 | orchestrator | Apply netplan configuration --------------------------------------------- 1.73s
2026-02-05 00:37:38.361768 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.66s
2026-02-05 00:37:38.361779 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.51s
2026-02-05 00:37:38.361790 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.48s
2026-02-05 00:37:38.361801 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s
2026-02-05 00:37:38.361812 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.44s
2026-02-05 00:37:38.361823 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.67s
2026-02-05 00:37:38.361834 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s
2026-02-05 00:37:38.361854 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.55s
2026-02-05 00:37:38.743265 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-05 00:37:50.546910 | orchestrator | 2026-02-05 00:37:50 | INFO  | Prepare task for execution of reboot.
2026-02-05 00:37:50.617257 | orchestrator | 2026-02-05 00:37:50 | INFO  | Task 9a2ae39c-8d34-4b15-b979-1ee3f56404d3 (reboot) was prepared for execution.
2026-02-05 00:37:50.617324 | orchestrator | 2026-02-05 00:37:50 | INFO  | It takes a moment until task 9a2ae39c-8d34-4b15-b979-1ee3f56404d3 (reboot) has been started and output is visible here.
2026-02-05 00:38:00.372760 | orchestrator | 2026-02-05 00:38:00.372859 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:38:00.372876 | orchestrator | 2026-02-05 00:38:00.372888 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:38:00.372922 | orchestrator | Thursday 05 February 2026 00:37:54 +0000 (0:00:00.154) 0:00:00.154 ***** 2026-02-05 00:38:00.372940 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:38:00.372961 | orchestrator | 2026-02-05 00:38:00.372981 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:38:00.372999 | orchestrator | Thursday 05 February 2026 00:37:54 +0000 (0:00:00.083) 0:00:00.238 ***** 2026-02-05 00:38:00.373012 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:38:00.373023 | orchestrator | 2026-02-05 00:38:00.373034 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:38:00.373045 | orchestrator | Thursday 05 February 2026 00:37:55 +0000 (0:00:00.909) 0:00:01.148 ***** 2026-02-05 00:38:00.373055 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:38:00.373066 | orchestrator | 2026-02-05 00:38:00.373077 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:38:00.373088 | orchestrator | 2026-02-05 00:38:00.373099 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:38:00.373110 | orchestrator | Thursday 05 February 2026 00:37:55 +0000 (0:00:00.099) 0:00:01.247 ***** 2026-02-05 00:38:00.373121 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:38:00.373131 | orchestrator | 2026-02-05 00:38:00.373142 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:38:00.373153 | orchestrator | Thursday 05 February 
2026 00:37:55 +0000 (0:00:00.090) 0:00:01.338 ***** 2026-02-05 00:38:00.373164 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:38:00.373194 | orchestrator | 2026-02-05 00:38:00.373206 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:38:00.373217 | orchestrator | Thursday 05 February 2026 00:37:56 +0000 (0:00:00.641) 0:00:01.980 ***** 2026-02-05 00:38:00.373229 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:38:00.373240 | orchestrator | 2026-02-05 00:38:00.373253 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:38:00.373267 | orchestrator | 2026-02-05 00:38:00.373281 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:38:00.373302 | orchestrator | Thursday 05 February 2026 00:37:56 +0000 (0:00:00.107) 0:00:02.087 ***** 2026-02-05 00:38:00.373326 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:38:00.373356 | orchestrator | 2026-02-05 00:38:00.373377 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:38:00.373396 | orchestrator | Thursday 05 February 2026 00:37:56 +0000 (0:00:00.183) 0:00:02.271 ***** 2026-02-05 00:38:00.373415 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:38:00.373433 | orchestrator | 2026-02-05 00:38:00.373468 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:38:00.373488 | orchestrator | Thursday 05 February 2026 00:37:57 +0000 (0:00:00.658) 0:00:02.929 ***** 2026-02-05 00:38:00.373507 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:38:00.373529 | orchestrator | 2026-02-05 00:38:00.373549 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:38:00.373570 | orchestrator | 2026-02-05 00:38:00.373592 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-02-05 00:38:00.373637 | orchestrator | Thursday 05 February 2026 00:37:57 +0000 (0:00:00.104) 0:00:03.034 ***** 2026-02-05 00:38:00.373660 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:38:00.373679 | orchestrator | 2026-02-05 00:38:00.373699 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:38:00.373711 | orchestrator | Thursday 05 February 2026 00:37:57 +0000 (0:00:00.093) 0:00:03.127 ***** 2026-02-05 00:38:00.373722 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:38:00.373733 | orchestrator | 2026-02-05 00:38:00.373744 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:38:00.373755 | orchestrator | Thursday 05 February 2026 00:37:58 +0000 (0:00:00.688) 0:00:03.815 ***** 2026-02-05 00:38:00.373766 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:38:00.373790 | orchestrator | 2026-02-05 00:38:00.373801 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:38:00.373812 | orchestrator | 2026-02-05 00:38:00.373823 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:38:00.373834 | orchestrator | Thursday 05 February 2026 00:37:58 +0000 (0:00:00.109) 0:00:03.925 ***** 2026-02-05 00:38:00.373845 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:38:00.373856 | orchestrator | 2026-02-05 00:38:00.373867 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:38:00.373878 | orchestrator | Thursday 05 February 2026 00:37:58 +0000 (0:00:00.101) 0:00:04.026 ***** 2026-02-05 00:38:00.373888 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:38:00.373899 | orchestrator | 2026-02-05 00:38:00.373910 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-02-05 00:38:00.373921 | orchestrator | Thursday 05 February 2026 00:37:59 +0000 (0:00:00.664) 0:00:04.691 ***** 2026-02-05 00:38:00.373932 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:38:00.373943 | orchestrator | 2026-02-05 00:38:00.373954 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-05 00:38:00.373965 | orchestrator | 2026-02-05 00:38:00.373976 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-05 00:38:00.373986 | orchestrator | Thursday 05 February 2026 00:37:59 +0000 (0:00:00.110) 0:00:04.802 ***** 2026-02-05 00:38:00.373997 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:38:00.374008 | orchestrator | 2026-02-05 00:38:00.374070 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-05 00:38:00.374082 | orchestrator | Thursday 05 February 2026 00:37:59 +0000 (0:00:00.098) 0:00:04.900 ***** 2026-02-05 00:38:00.374093 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:38:00.374104 | orchestrator | 2026-02-05 00:38:00.374115 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-05 00:38:00.374126 | orchestrator | Thursday 05 February 2026 00:38:00 +0000 (0:00:00.681) 0:00:05.581 ***** 2026-02-05 00:38:00.374157 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:38:00.374169 | orchestrator | 2026-02-05 00:38:00.374180 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:38:00.374192 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:38:00.374205 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:38:00.374216 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-02-05 00:38:00.374227 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:38:00.374238 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:38:00.374248 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:38:00.374259 | orchestrator | 2026-02-05 00:38:00.374270 | orchestrator | 2026-02-05 00:38:00.374281 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:38:00.374293 | orchestrator | Thursday 05 February 2026 00:38:00 +0000 (0:00:00.030) 0:00:05.612 ***** 2026-02-05 00:38:00.374304 | orchestrator | =============================================================================== 2026-02-05 00:38:00.374315 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.24s 2026-02-05 00:38:00.374325 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.65s 2026-02-05 00:38:00.374336 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2026-02-05 00:38:00.664766 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-05 00:38:12.645987 | orchestrator | 2026-02-05 00:38:12 | INFO  | Prepare task for execution of wait-for-connection. 2026-02-05 00:38:12.719478 | orchestrator | 2026-02-05 00:38:12 | INFO  | Task 109d2104-a93d-45d2-9e04-1fc230bb0f69 (wait-for-connection) was prepared for execution. 2026-02-05 00:38:12.719568 | orchestrator | 2026-02-05 00:38:12 | INFO  | It takes a moment until task 109d2104-a93d-45d2-9e04-1fc230bb0f69 (wait-for-connection) has been started and output is visible here. 
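The `osism apply ... -e ireallymeanit=yes` invocations above pass a confirmation variable, and the playbook's first task ("Exit playbook, if user did not mean to reboot systems") only skips the abort when that variable is set. A minimal shell sketch of the same opt-in guard idea (function and message are hypothetical, not taken from the OSISM code):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "ireallymeanit" confirmation guard seen in the
# reboot playbook runs above: refuse to act unless the caller opted in.

confirm_or_exit() {
    local guard="${1:-no}"
    if [ "$guard" != "yes" ]; then
        echo "EXIT: pass ireallymeanit=yes to really reboot systems" >&2
        return 1
    fi
    echo "proceeding with reboot"
}

# Opt-in path, mirroring `-e ireallymeanit=yes` on the command line.
ireallymeanit=yes
confirm_or_exit "$ireallymeanit"
```

In the real playbook this guard is an Ansible task per play, which is why every "Exit playbook, ..." task above reports `skipping` for each node.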
2026-02-05 00:38:28.553746 | orchestrator | 2026-02-05 00:38:28.553909 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-05 00:38:28.553930 | orchestrator | 2026-02-05 00:38:28.553943 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-05 00:38:28.553954 | orchestrator | Thursday 05 February 2026 00:38:16 +0000 (0:00:00.207) 0:00:00.207 ***** 2026-02-05 00:38:28.553966 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:38:28.553978 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:38:28.553990 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:38:28.554001 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:38:28.554011 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:38:28.554087 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:38:28.554098 | orchestrator | 2026-02-05 00:38:28.554110 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:38:28.554122 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:38:28.554135 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:38:28.554147 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:38:28.554158 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:38:28.554169 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:38:28.554180 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:38:28.554191 | orchestrator | 2026-02-05 00:38:28.554202 | orchestrator | 2026-02-05 00:38:28.554213 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-05 00:38:28.554224 | orchestrator | Thursday 05 February 2026 00:38:28 +0000 (0:00:11.441) 0:00:11.649 ***** 2026-02-05 00:38:28.554236 | orchestrator | =============================================================================== 2026-02-05 00:38:28.554248 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.44s 2026-02-05 00:38:28.737654 | orchestrator | + osism apply hddtemp 2026-02-05 00:38:40.551235 | orchestrator | 2026-02-05 00:38:40 | INFO  | Prepare task for execution of hddtemp. 2026-02-05 00:38:40.619021 | orchestrator | 2026-02-05 00:38:40 | INFO  | Task 48a4e14e-1630-4846-9bb9-cd17a8b0f2f9 (hddtemp) was prepared for execution. 2026-02-05 00:38:40.619099 | orchestrator | 2026-02-05 00:38:40 | INFO  | It takes a moment until task 48a4e14e-1630-4846-9bb9-cd17a8b0f2f9 (hddtemp) has been started and output is visible here. 2026-02-05 00:39:07.344713 | orchestrator | 2026-02-05 00:39:07.344806 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-05 00:39:07.344817 | orchestrator | 2026-02-05 00:39:07.344825 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-05 00:39:07.344832 | orchestrator | Thursday 05 February 2026 00:38:44 +0000 (0:00:00.267) 0:00:00.267 ***** 2026-02-05 00:39:07.344839 | orchestrator | ok: [testbed-manager] 2026-02-05 00:39:07.344869 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:39:07.344877 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:39:07.344884 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:39:07.344890 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:39:07.344898 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:39:07.344904 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:39:07.344910 | orchestrator | 2026-02-05 00:39:07.344916 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-02-05 00:39:07.344922 | orchestrator | Thursday 05 February 2026 00:38:45 +0000 (0:00:00.687) 0:00:00.955 ***** 2026-02-05 00:39:07.344930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:39:07.344937 | orchestrator | 2026-02-05 00:39:07.344943 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-05 00:39:07.344949 | orchestrator | Thursday 05 February 2026 00:38:46 +0000 (0:00:01.180) 0:00:02.135 ***** 2026-02-05 00:39:07.344954 | orchestrator | ok: [testbed-manager] 2026-02-05 00:39:07.344961 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:39:07.344966 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:39:07.344972 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:39:07.344978 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:39:07.344983 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:39:07.344989 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:39:07.344995 | orchestrator | 2026-02-05 00:39:07.345001 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-05 00:39:07.345007 | orchestrator | Thursday 05 February 2026 00:38:48 +0000 (0:00:01.900) 0:00:04.036 ***** 2026-02-05 00:39:07.345013 | orchestrator | changed: [testbed-manager] 2026-02-05 00:39:07.345020 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:39:07.345026 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:39:07.345032 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:39:07.345038 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:39:07.345044 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:39:07.345050 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:39:07.345056 | 
orchestrator | 2026-02-05 00:39:07.345075 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-02-05 00:39:07.345081 | orchestrator | Thursday 05 February 2026 00:38:49 +0000 (0:00:01.008) 0:00:05.044 ***** 2026-02-05 00:39:07.345087 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:39:07.345094 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:39:07.345100 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:39:07.345106 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:39:07.345112 | orchestrator | ok: [testbed-manager] 2026-02-05 00:39:07.345119 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:39:07.345125 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:39:07.345130 | orchestrator | 2026-02-05 00:39:07.345137 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-05 00:39:07.345142 | orchestrator | Thursday 05 February 2026 00:38:50 +0000 (0:00:01.060) 0:00:06.104 ***** 2026-02-05 00:39:07.345148 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:39:07.345154 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:39:07.345160 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:39:07.345165 | orchestrator | changed: [testbed-manager] 2026-02-05 00:39:07.345171 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:39:07.345177 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:39:07.345183 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:39:07.345188 | orchestrator | 2026-02-05 00:39:07.345194 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-05 00:39:07.345200 | orchestrator | Thursday 05 February 2026 00:38:51 +0000 (0:00:00.695) 0:00:06.800 ***** 2026-02-05 00:39:07.345206 | orchestrator | changed: [testbed-manager] 2026-02-05 00:39:07.345212 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:39:07.345224 | orchestrator | changed: [testbed-node-5] 
2026-02-05 00:39:07.345230 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:39:07.345235 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:39:07.345241 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:39:07.345247 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:39:07.345254 | orchestrator | 2026-02-05 00:39:07.345260 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-05 00:39:07.345266 | orchestrator | Thursday 05 February 2026 00:39:04 +0000 (0:00:13.006) 0:00:19.806 ***** 2026-02-05 00:39:07.345272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:39:07.345278 | orchestrator | 2026-02-05 00:39:07.345284 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-05 00:39:07.345290 | orchestrator | Thursday 05 February 2026 00:39:05 +0000 (0:00:01.162) 0:00:20.969 ***** 2026-02-05 00:39:07.345295 | orchestrator | changed: [testbed-manager] 2026-02-05 00:39:07.345301 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:39:07.345307 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:39:07.345313 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:39:07.345319 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:39:07.345324 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:39:07.345330 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:39:07.345336 | orchestrator | 2026-02-05 00:39:07.345342 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:39:07.345348 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:39:07.345370 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:07.345376 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:07.345382 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:07.345388 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:07.345394 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:07.345399 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:39:07.345405 | orchestrator | 2026-02-05 00:39:07.345411 | orchestrator | 2026-02-05 00:39:07.345417 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:39:07.345423 | orchestrator | Thursday 05 February 2026 00:39:07 +0000 (0:00:01.655) 0:00:22.625 ***** 2026-02-05 00:39:07.345429 | orchestrator | =============================================================================== 2026-02-05 00:39:07.345435 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.01s 2026-02-05 00:39:07.345441 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.90s 2026-02-05 00:39:07.345447 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.66s 2026-02-05 00:39:07.345453 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.18s 2026-02-05 00:39:07.345459 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.16s 2026-02-05 00:39:07.345465 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.06s 2026-02-05 00:39:07.345475 | orchestrator | osism.services.hddtemp : Enable 
Kernel Module drivetemp ----------------- 1.01s 2026-02-05 00:39:07.345481 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.70s 2026-02-05 00:39:07.345491 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2026-02-05 00:39:07.526930 | orchestrator | ++ semver latest 7.1.1 2026-02-05 00:39:07.571928 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-05 00:39:07.572019 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-02-05 00:39:07.572034 | orchestrator | + sudo systemctl restart manager.service 2026-02-05 00:39:20.719736 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-05 00:39:20.719810 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-05 00:39:20.719816 | orchestrator | + local max_attempts=60 2026-02-05 00:39:20.719822 | orchestrator | + local name=ceph-ansible 2026-02-05 00:39:20.719826 | orchestrator | + local attempt_num=1 2026-02-05 00:39:20.719831 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:20.755082 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:20.755168 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:20.755177 | orchestrator | + sleep 5 2026-02-05 00:39:25.759128 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:25.786790 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:25.786883 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:25.786898 | orchestrator | + sleep 5 2026-02-05 00:39:30.789916 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:30.825458 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:30.825547 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:30.825590 | orchestrator | + sleep 5 2026-02-05 00:39:35.829998 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:35.860982 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:35.861077 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:35.861093 | orchestrator | + sleep 5 2026-02-05 00:39:40.864932 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:40.906635 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:40.906753 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:40.906771 | orchestrator | + sleep 5 2026-02-05 00:39:45.911074 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:45.953862 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:45.953945 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:45.953960 | orchestrator | + sleep 5 2026-02-05 00:39:50.958896 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:50.994733 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:50.994828 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:50.994843 | orchestrator | + sleep 5 2026-02-05 00:39:56.000243 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:39:56.030873 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:39:56.030969 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:39:56.030986 | orchestrator | + sleep 5 2026-02-05 00:40:01.033938 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:40:01.070519 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:01.070663 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:40:01.070676 | orchestrator | + sleep 5 2026-02-05 00:40:06.074905 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:40:06.110915 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:06.111009 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:40:06.111026 | orchestrator | + sleep 5 2026-02-05 00:40:11.114358 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:40:11.147396 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:11.147497 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:40:11.147511 | orchestrator | + sleep 5 2026-02-05 00:40:16.151827 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:40:16.188499 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:16.188622 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:40:16.188639 | orchestrator | + sleep 5 2026-02-05 00:40:21.192387 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:40:21.229765 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:21.229885 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-05 00:40:21.229904 | orchestrator | + sleep 5 2026-02-05 00:40:26.233838 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-05 00:40:26.271925 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:26.272024 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-05 00:40:26.272041 | orchestrator | + local max_attempts=60 2026-02-05 00:40:26.272055 | orchestrator | + local name=kolla-ansible 2026-02-05 00:40:26.272066 | orchestrator | + local attempt_num=1 2026-02-05 00:40:26.273132 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-05 00:40:26.305431 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:26.305587 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-02-05 00:40:26.305599 | orchestrator | + local max_attempts=60 2026-02-05 00:40:26.305606 | orchestrator | + local name=osism-ansible 2026-02-05 00:40:26.305613 | orchestrator | + local attempt_num=1 2026-02-05 00:40:26.305627 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-05 00:40:26.332130 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-05 00:40:26.332214 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-05 00:40:26.332228 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-05 00:40:26.489699 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-05 00:40:26.608963 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-05 00:40:26.714752 | orchestrator | ARA in osism-ansible already disabled. 2026-02-05 00:40:26.864009 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-05 00:40:26.864125 | orchestrator | + osism apply gather-facts 2026-02-05 00:40:38.715076 | orchestrator | 2026-02-05 00:40:38 | INFO  | Prepare task for execution of gather-facts. 2026-02-05 00:40:38.774404 | orchestrator | 2026-02-05 00:40:38 | INFO  | Task 575c8777-1ecb-4d45-8a1c-c1f8bf81613e (gather-facts) was prepared for execution. 2026-02-05 00:40:38.774498 | orchestrator | 2026-02-05 00:40:38 | INFO  | It takes a moment until task 575c8777-1ecb-4d45-8a1c-c1f8bf81613e (gather-facts) has been started and output is visible here. 
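The repeated `docker inspect` / `sleep 5` lines above are the `set -x` trace of a polling helper. Reconstructed from that trace (the locals, the `{{.State.Health.Status}}` format string, and the `(( attempt_num++ == max_attempts ))` check all appear verbatim), the function is roughly:

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy from the set -x trace above.
# The trace calls /usr/bin/docker directly; a bare `docker` is used here so
# the sketch stays overridable. Error message wording is assumed.

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage as in the log: wait_for_container_healthy 60 ceph-ansible
```

With `sleep 5` between attempts, `max_attempts=60` bounds the wait at roughly five minutes per container; in the run above, `ceph-ansible` went `unhealthy` → `starting` → `healthy` in about a minute, while `kolla-ansible` and `osism-ansible` were already healthy on the first check.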
2026-02-05 00:40:51.878268 | orchestrator | 2026-02-05 00:40:51.878378 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 00:40:51.878396 | orchestrator | 2026-02-05 00:40:51.878409 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-05 00:40:51.878422 | orchestrator | Thursday 05 February 2026 00:40:42 +0000 (0:00:00.196) 0:00:00.196 ***** 2026-02-05 00:40:51.878434 | orchestrator | ok: [testbed-manager] 2026-02-05 00:40:51.878447 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:40:51.878458 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:40:51.878468 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:40:51.878479 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:40:51.878491 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:40:51.878502 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:40:51.878513 | orchestrator | 2026-02-05 00:40:51.878589 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 00:40:51.878603 | orchestrator | 2026-02-05 00:40:51.878610 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 00:40:51.878616 | orchestrator | Thursday 05 February 2026 00:40:51 +0000 (0:00:08.635) 0:00:08.832 ***** 2026-02-05 00:40:51.878623 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:40:51.878633 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:40:51.878644 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:40:51.878652 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:40:51.878658 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:40:51.878665 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:40:51.878671 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:40:51.878677 | orchestrator | 2026-02-05 00:40:51.878701 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-05 00:40:51.878709 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878737 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878744 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878750 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878757 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878763 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878769 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-05 00:40:51.878775 | orchestrator | 2026-02-05 00:40:51.878781 | orchestrator | 2026-02-05 00:40:51.878788 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:40:51.878794 | orchestrator | Thursday 05 February 2026 00:40:51 +0000 (0:00:00.440) 0:00:09.272 ***** 2026-02-05 00:40:51.878800 | orchestrator | =============================================================================== 2026-02-05 00:40:51.878806 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.64s 2026-02-05 00:40:51.878812 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s 2026-02-05 00:40:52.115868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-05 00:40:52.127598 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-05 
00:40:52.140380 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-05 00:40:52.147476 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-05 00:40:52.157710 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-05 00:40:52.165287 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-05 00:40:52.172952 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-05 00:40:52.180366 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-05 00:40:52.189723 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-05 00:40:52.198454 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-05 00:40:52.210554 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-05 00:40:52.215350 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-05 00:40:52.229324 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-05 00:40:52.260955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-05 00:40:52.271004 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-05 00:40:52.279888 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-05 00:40:52.289809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-05 00:40:52.298752 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-05 00:40:52.306143 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-05 00:40:52.314127 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-05 00:40:52.323064 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-05 00:40:52.329862 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-05 00:40:52.337038 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-05 00:40:52.345838 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-05 00:40:52.685930 | orchestrator | ok: Runtime: 0:23:35.046113 2026-02-05 00:40:52.794999 | 2026-02-05 00:40:52.795139 | TASK [Deploy services] 2026-02-05 00:40:53.328042 | orchestrator | skipping: Conditional result was False 2026-02-05 00:40:53.346154 | 2026-02-05 00:40:53.346334 | TASK [Deploy in a nutshell] 2026-02-05 00:40:54.067124 | orchestrator | + set -e 2026-02-05 00:40:54.067316 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-05 00:40:54.067352 | orchestrator | ++ export INTERACTIVE=false 2026-02-05 00:40:54.067377 | orchestrator | ++ INTERACTIVE=false 2026-02-05 00:40:54.067391 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-05 00:40:54.067405 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-05 00:40:54.067419 | 
orchestrator | + source /opt/manager-vars.sh
2026-02-05 00:40:54.067464 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-05 00:40:54.067494 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-05 00:40:54.067508 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-05 00:40:54.067549 | orchestrator | ++ CEPH_VERSION=reef
2026-02-05 00:40:54.067579 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-05 00:40:54.067599 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-05 00:40:54.067611 | orchestrator | ++ export MANAGER_VERSION=latest
2026-02-05 00:40:54.067633 | orchestrator | ++ MANAGER_VERSION=latest
2026-02-05 00:40:54.067644 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-05 00:40:54.067658 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-05 00:40:54.067669 | orchestrator | ++ export ARA=false
2026-02-05 00:40:54.067681 | orchestrator | ++ ARA=false
2026-02-05 00:40:54.067692 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-05 00:40:54.067705 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-05 00:40:54.067716 | orchestrator | ++ export TEMPEST=true
2026-02-05 00:40:54.067727 | orchestrator | ++ TEMPEST=true
2026-02-05 00:40:54.067738 | orchestrator | ++ export IS_ZUUL=true
2026-02-05 00:40:54.067749 | orchestrator | ++ IS_ZUUL=true
2026-02-05 00:40:54.067761 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23
2026-02-05 00:40:54.067772 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.23
2026-02-05 00:40:54.067783 | orchestrator | ++ export EXTERNAL_API=false
2026-02-05 00:40:54.067795 | orchestrator | ++ EXTERNAL_API=false
2026-02-05 00:40:54.067805 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-05 00:40:54.067817 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-05 00:40:54.067828 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-05 00:40:54.067839 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-05 00:40:54.067850 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-05 00:40:54.067862 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-05 00:40:54.067873 | orchestrator | + echo
2026-02-05 00:40:54.067885 | orchestrator |
2026-02-05 00:40:54.067896 | orchestrator | # PULL IMAGES
2026-02-05 00:40:54.067907 | orchestrator |
2026-02-05 00:40:54.067919 | orchestrator | + echo '# PULL IMAGES'
2026-02-05 00:40:54.067930 | orchestrator | + echo
2026-02-05 00:40:54.069123 | orchestrator | ++ semver latest 7.0.0
2026-02-05 00:40:54.125287 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-05 00:40:54.125388 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-02-05 00:40:54.125424 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-05 00:40:55.936986 | orchestrator | 2026-02-05 00:40:55 | INFO  | Trying to run play pull-images in environment custom
2026-02-05 00:41:05.945106 | orchestrator | 2026-02-05 00:41:05 | INFO  | Prepare task for execution of pull-images.
2026-02-05 00:41:06.011315 | orchestrator | 2026-02-05 00:41:06 | INFO  | Task d2447863-528c-4730-b7f4-326924dd71f7 (pull-images) was prepared for execution.
2026-02-05 00:41:06.011391 | orchestrator | 2026-02-05 00:41:06 | INFO  | Task d2447863-528c-4730-b7f4-326924dd71f7 is running in background. No more output. Check ARA for logs.
2026-02-05 00:41:08.059130 | orchestrator | 2026-02-05 00:41:08 | INFO  | Trying to run play wipe-partitions in environment custom
2026-02-05 00:41:18.171036 | orchestrator | 2026-02-05 00:41:18 | INFO  | Prepare task for execution of wipe-partitions.
2026-02-05 00:41:18.275650 | orchestrator | 2026-02-05 00:41:18 | INFO  | Task 0c50911c-1ec9-49b0-90d3-7e43a4a85f06 (wipe-partitions) was prepared for execution.
2026-02-05 00:41:18.275737 | orchestrator | 2026-02-05 00:41:18 | INFO  | It takes a moment until task 0c50911c-1ec9-49b0-90d3-7e43a4a85f06 (wipe-partitions) has been started and output is visible here.
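The version gate visible in the trace above (`semver latest 7.0.0` prints -1, so the script falls through to an explicit check for the literal tag `latest`) can be sketched like this. This is a hedged reconstruction, not the actual deploy script: `compare_semver` here is a pure-shell stand-in for whatever external `semver` helper the script calls, and the `7.0.0` threshold is simply the value seen in the trace.

```shell
# Stand-in semver comparison (assumption: mimics the trace's `semver A B`
# helper, which prints -1/0/1). Non-numeric tags such as "latest" sort lowest,
# matching the observed `semver latest 7.0.0` -> -1.
compare_semver() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  case "$1" in *[!0-9.]*) echo -1; return;; esac
  case "$2" in *[!0-9.]*) echo 1; return;; esac
  if [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

# The gate itself: pinned versions >= 7.0.0 pass, and so does the literal
# "latest" tag via the fallback check seen as `[[ latest == latest ]]`.
manager_supports_feature() {
  local v="$1"
  [ "$(compare_semver "$v" 7.0.0)" -ge 0 ] || [ "$v" = "latest" ]
}
```

With `MANAGER_VERSION=latest`, as in this job, the numeric comparison fails but the fallback accepts the tag, so the play still runs.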
2026-02-05 00:41:30.297837 | orchestrator |
2026-02-05 00:41:30.297940 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-02-05 00:41:30.297956 | orchestrator |
2026-02-05 00:41:30.297974 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-02-05 00:41:30.297993 | orchestrator | Thursday 05 February 2026 00:41:21 +0000 (0:00:00.098) 0:00:00.098 *****
2026-02-05 00:41:30.298089 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:41:30.298104 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:41:30.298114 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:41:30.298124 | orchestrator |
2026-02-05 00:41:30.298134 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-02-05 00:41:30.298144 | orchestrator | Thursday 05 February 2026 00:41:21 +0000 (0:00:00.523) 0:00:00.621 *****
2026-02-05 00:41:30.298158 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:30.298168 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:41:30.298178 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:41:30.298188 | orchestrator |
2026-02-05 00:41:30.298197 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-02-05 00:41:30.298207 | orchestrator | Thursday 05 February 2026 00:41:22 +0000 (0:00:00.299) 0:00:00.921 *****
2026-02-05 00:41:30.298217 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:41:30.298227 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:41:30.298237 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:41:30.298247 | orchestrator |
2026-02-05 00:41:30.298256 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-02-05 00:41:30.298266 | orchestrator | Thursday 05 February 2026 00:41:22 +0000 (0:00:00.215) 0:00:01.454 *****
2026-02-05 00:41:30.298276 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:30.298286 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:41:30.298296 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:41:30.298305 | orchestrator |
2026-02-05 00:41:30.298315 | orchestrator | TASK [Check device availability] ***********************************************
2026-02-05 00:41:30.298325 | orchestrator | Thursday 05 February 2026 00:41:23 +0000 (0:00:00.215) 0:00:01.669 *****
2026-02-05 00:41:30.298335 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-05 00:41:30.298349 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-05 00:41:30.298359 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-05 00:41:30.298370 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-05 00:41:30.298382 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-05 00:41:30.298393 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-05 00:41:30.298404 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-05 00:41:30.298416 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-05 00:41:30.298427 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-05 00:41:30.298439 | orchestrator |
2026-02-05 00:41:30.298451 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-02-05 00:41:30.298462 | orchestrator | Thursday 05 February 2026 00:41:25 +0000 (0:00:02.205) 0:00:03.875 *****
2026-02-05 00:41:30.298475 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-02-05 00:41:30.298488 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-02-05 00:41:30.298530 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-02-05 00:41:30.298549 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-02-05 00:41:30.298566 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-02-05 00:41:30.298583 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-02-05 00:41:30.298599 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-02-05 00:41:30.298614 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-02-05 00:41:30.298631 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-02-05 00:41:30.298650 | orchestrator |
2026-02-05 00:41:30.298671 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-02-05 00:41:30.298681 | orchestrator | Thursday 05 February 2026 00:41:26 +0000 (0:00:01.407) 0:00:05.282 *****
2026-02-05 00:41:30.298691 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-05 00:41:30.298701 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-05 00:41:30.298711 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-05 00:41:30.298720 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-05 00:41:30.298742 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-05 00:41:30.298758 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-05 00:41:30.298774 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-05 00:41:30.298790 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-05 00:41:30.298808 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-05 00:41:30.298822 | orchestrator |
2026-02-05 00:41:30.298848 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-02-05 00:41:30.298858 | orchestrator | Thursday 05 February 2026 00:41:28 +0000 (0:00:02.091) 0:00:07.374 *****
2026-02-05 00:41:30.298868 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:41:30.298891 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:41:30.298908 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:41:30.298924 | orchestrator |
2026-02-05 00:41:30.298939 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-02-05 00:41:30.298955 | orchestrator | Thursday 05 February 2026 00:41:29 +0000 (0:00:00.632) 0:00:08.007 *****
2026-02-05 00:41:30.298971 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:41:30.298988 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:41:30.298998 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:41:30.299009 | orchestrator |
2026-02-05 00:41:30.299019 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:41:30.299030 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:30.299041 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:30.299069 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:30.299080 | orchestrator |
2026-02-05 00:41:30.299089 | orchestrator |
2026-02-05 00:41:30.299099 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:41:30.299109 | orchestrator | Thursday 05 February 2026 00:41:29 +0000 (0:00:00.613) 0:00:08.620 *****
2026-02-05 00:41:30.299118 | orchestrator | ===============================================================================
2026-02-05 00:41:30.299127 | orchestrator | Check device availability ----------------------------------------------- 2.21s
2026-02-05 00:41:30.299137 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.09s
2026-02-05 00:41:30.299147 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.41s
2026-02-05 00:41:30.299156 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2026-02-05 00:41:30.299166 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2026-02-05 00:41:30.299175 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.53s
2026-02-05 00:41:30.299185 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.52s
2026-02-05 00:41:30.299194 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s
2026-02-05 00:41:30.299204 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-02-05 00:41:42.668752 | orchestrator | 2026-02-05 00:41:42 | INFO  | Prepare task for execution of facts.
2026-02-05 00:41:42.734118 | orchestrator | 2026-02-05 00:41:42 | INFO  | Task 7e59fed7-a550-4007-b255-d8313b7709a7 (facts) was prepared for execution.
2026-02-05 00:41:42.734186 | orchestrator | 2026-02-05 00:41:42 | INFO  | It takes a moment until task 7e59fed7-a550-4007-b255-d8313b7709a7 (facts) has been started and output is visible here.
2026-02-05 00:41:55.324385 | orchestrator |
2026-02-05 00:41:55.324583 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-05 00:41:55.324605 | orchestrator |
2026-02-05 00:41:55.324647 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-05 00:41:55.324660 | orchestrator | Thursday 05 February 2026 00:41:46 +0000 (0:00:00.236) 0:00:00.236 *****
2026-02-05 00:41:55.324671 | orchestrator | ok: [testbed-manager]
2026-02-05 00:41:55.324683 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:41:55.324694 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:41:55.324705 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:41:55.324716 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:41:55.324726 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:41:55.324737 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:41:55.324748 | orchestrator |
2026-02-05 00:41:55.324759 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-05 00:41:55.324770 | orchestrator | Thursday 05 February 2026 00:41:47 +0000 (0:00:00.978) 0:00:01.214 *****
2026-02-05 00:41:55.324781 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:41:55.324793 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:41:55.324806 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:41:55.324826 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:41:55.324854 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:55.324874 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:41:55.324890 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:41:55.324908 | orchestrator |
2026-02-05 00:41:55.324927 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-05 00:41:55.324970 | orchestrator |
2026-02-05 00:41:55.324990 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-05 00:41:55.325010 | orchestrator | Thursday 05 February 2026 00:41:48 +0000 (0:00:01.101) 0:00:02.315 *****
2026-02-05 00:41:55.325029 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:41:55.325049 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:41:55.325068 | orchestrator | ok: [testbed-manager]
2026-02-05 00:41:55.325086 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:41:55.325105 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:41:55.325124 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:41:55.325143 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:41:55.325162 | orchestrator |
2026-02-05 00:41:55.325177 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-05 00:41:55.325188 | orchestrator |
2026-02-05 00:41:55.325199 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-05 00:41:55.325211 | orchestrator | Thursday 05 February 2026 00:41:54 +0000 (0:00:05.673) 0:00:07.989 *****
2026-02-05 00:41:55.325221 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:41:55.325233 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:41:55.325244 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:41:55.325254 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:41:55.325265 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:41:55.325276 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:41:55.325287 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:41:55.325298 | orchestrator |
2026-02-05 00:41:55.325309 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:41:55.325321 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325334 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325345 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325356 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325367 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325390 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325401 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 00:41:55.325412 | orchestrator |
2026-02-05 00:41:55.325423 | orchestrator |
2026-02-05 00:41:55.325434 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:41:55.325445 | orchestrator | Thursday 05 February 2026 00:41:55 +0000 (0:00:00.439) 0:00:08.429 *****
2026-02-05 00:41:55.325456 | orchestrator | ===============================================================================
2026-02-05 00:41:55.325467 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.67s
2026-02-05 00:41:55.325479 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s
2026-02-05 00:41:55.325490 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s
2026-02-05 00:41:55.325530 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.44s
2026-02-05 00:41:57.374136 | orchestrator | 2026-02-05 00:41:57 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-02-05 00:41:57.430886 | orchestrator | 2026-02-05 00:41:57 | INFO  | Task 37e1f97b-a746-45b9-b154-fc841c7903c3 (ceph-configure-lvm-volumes) was prepared for execution.
2026-02-05 00:41:57.430952 | orchestrator | 2026-02-05 00:41:57 | INFO  | It takes a moment until task 37e1f97b-a746-45b9-b154-fc841c7903c3 (ceph-configure-lvm-volumes) has been started and output is visible here.
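Outside of Ansible, the wipe-partitions play above corresponds roughly to the following manual sequence. This is an assumed sketch mirroring the task names in the log, not the play itself; it prints the commands by default because they are destructive (set `RUN=1` to actually execute, and note `/dev/sdb`-`/dev/sdd` are just the OSD disks used in this testbed).

```shell
# Dry-run wrapper: only executes when RUN=1, otherwise prints what would run.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

for dev in /dev/sdb /dev/sdc /dev/sdd; do        # the testbed's OSD disks
  run wipefs --all "$dev"                        # "Wipe partitions with wipefs"
  run dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
done
run udevadm control --reload-rules               # "Reload udev rules"
run udevadm trigger                              # "Request device events from the kernel"
```

Zeroing the first 32 MiB on top of `wipefs` also clears partition tables and LVM/Ceph labels that signature-based wiping can miss, which is why the play does both.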
2026-02-05 00:42:07.177006 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 00:42:07.177078 | orchestrator | 2.16.14
2026-02-05 00:42:07.177085 | orchestrator |
2026-02-05 00:42:07.177090 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-05 00:42:07.177095 | orchestrator |
2026-02-05 00:42:07.177099 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 00:42:07.177103 | orchestrator | Thursday 05 February 2026 00:42:01 +0000 (0:00:00.302) 0:00:00.302 *****
2026-02-05 00:42:07.177108 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 00:42:07.177112 | orchestrator |
2026-02-05 00:42:07.177117 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-05 00:42:07.177121 | orchestrator | Thursday 05 February 2026 00:42:01 +0000 (0:00:00.214) 0:00:00.516 *****
2026-02-05 00:42:07.177125 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:07.177129 | orchestrator |
2026-02-05 00:42:07.177133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177137 | orchestrator | Thursday 05 February 2026 00:42:01 +0000 (0:00:00.243) 0:00:00.760 *****
2026-02-05 00:42:07.177147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:42:07.177152 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:42:07.177156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:42:07.177159 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:42:07.177163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:42:07.177167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:42:07.177171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:42:07.177175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:42:07.177179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-05 00:42:07.177183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:42:07.177201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:42:07.177205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:42:07.177209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:42:07.177213 | orchestrator |
2026-02-05 00:42:07.177216 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177220 | orchestrator | Thursday 05 February 2026 00:42:02 +0000 (0:00:00.404) 0:00:01.164 *****
2026-02-05 00:42:07.177224 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177228 | orchestrator |
2026-02-05 00:42:07.177232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177236 | orchestrator | Thursday 05 February 2026 00:42:02 +0000 (0:00:00.172) 0:00:01.337 *****
2026-02-05 00:42:07.177240 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177244 | orchestrator |
2026-02-05 00:42:07.177247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177254 | orchestrator | Thursday 05 February 2026 00:42:02 +0000 (0:00:00.167) 0:00:01.504 *****
2026-02-05 00:42:07.177258 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177262 | orchestrator |
2026-02-05 00:42:07.177266 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177270 | orchestrator | Thursday 05 February 2026 00:42:02 +0000 (0:00:00.184) 0:00:01.689 *****
2026-02-05 00:42:07.177274 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177278 | orchestrator |
2026-02-05 00:42:07.177282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177285 | orchestrator | Thursday 05 February 2026 00:42:02 +0000 (0:00:00.157) 0:00:01.847 *****
2026-02-05 00:42:07.177289 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177293 | orchestrator |
2026-02-05 00:42:07.177297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177300 | orchestrator | Thursday 05 February 2026 00:42:02 +0000 (0:00:00.176) 0:00:02.023 *****
2026-02-05 00:42:07.177304 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177308 | orchestrator |
2026-02-05 00:42:07.177312 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177315 | orchestrator | Thursday 05 February 2026 00:42:03 +0000 (0:00:00.181) 0:00:02.205 *****
2026-02-05 00:42:07.177319 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177323 | orchestrator |
2026-02-05 00:42:07.177327 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177331 | orchestrator | Thursday 05 February 2026 00:42:03 +0000 (0:00:00.172) 0:00:02.377 *****
2026-02-05 00:42:07.177334 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177338 | orchestrator |
2026-02-05 00:42:07.177342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177346 | orchestrator | Thursday 05 February 2026 00:42:03 +0000 (0:00:00.154) 0:00:02.532 *****
2026-02-05 00:42:07.177350 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58)
2026-02-05 00:42:07.177354 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58)
2026-02-05 00:42:07.177358 | orchestrator |
2026-02-05 00:42:07.177362 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177375 | orchestrator | Thursday 05 February 2026 00:42:03 +0000 (0:00:00.336) 0:00:02.869 *****
2026-02-05 00:42:07.177379 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b)
2026-02-05 00:42:07.177383 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b)
2026-02-05 00:42:07.177387 | orchestrator |
2026-02-05 00:42:07.177393 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177411 | orchestrator | Thursday 05 February 2026 00:42:04 +0000 (0:00:00.455) 0:00:03.324 *****
2026-02-05 00:42:07.177415 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3)
2026-02-05 00:42:07.177419 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3)
2026-02-05 00:42:07.177423 | orchestrator |
2026-02-05 00:42:07.177427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177431 | orchestrator | Thursday 05 February 2026 00:42:04 +0000 (0:00:00.438) 0:00:03.763 *****
2026-02-05 00:42:07.177434 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726)
2026-02-05 00:42:07.177438 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726)
2026-02-05 00:42:07.177442 | orchestrator |
2026-02-05 00:42:07.177446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:07.177450 | orchestrator | Thursday 05 February 2026 00:42:05 +0000 (0:00:00.609) 0:00:04.372 *****
2026-02-05 00:42:07.177453 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 00:42:07.177457 | orchestrator |
2026-02-05 00:42:07.177461 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177465 | orchestrator | Thursday 05 February 2026 00:42:05 +0000 (0:00:00.273) 0:00:04.645 *****
2026-02-05 00:42:07.177468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:42:07.177472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:42:07.177476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:42:07.177480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:42:07.177483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:42:07.177487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:42:07.177507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:42:07.177511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:42:07.177515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-05 00:42:07.177518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:42:07.177522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:42:07.177526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:42:07.177530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:42:07.177534 | orchestrator |
2026-02-05 00:42:07.177538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177542 | orchestrator | Thursday 05 February 2026 00:42:05 +0000 (0:00:00.330) 0:00:04.976 *****
2026-02-05 00:42:07.177546 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177549 | orchestrator |
2026-02-05 00:42:07.177553 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177557 | orchestrator | Thursday 05 February 2026 00:42:06 +0000 (0:00:00.177) 0:00:05.154 *****
2026-02-05 00:42:07.177561 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177564 | orchestrator |
2026-02-05 00:42:07.177568 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177572 | orchestrator | Thursday 05 February 2026 00:42:06 +0000 (0:00:00.175) 0:00:05.329 *****
2026-02-05 00:42:07.177576 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177583 | orchestrator |
2026-02-05 00:42:07.177587 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177591 | orchestrator | Thursday 05 February 2026 00:42:06 +0000 (0:00:00.170) 0:00:05.500 *****
2026-02-05 00:42:07.177595 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177598 | orchestrator |
2026-02-05 00:42:07.177602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177606 | orchestrator | Thursday 05 February 2026 00:42:06 +0000 (0:00:00.188) 0:00:05.688 *****
2026-02-05 00:42:07.177610 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177614 | orchestrator |
2026-02-05 00:42:07.177617 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177621 | orchestrator | Thursday 05 February 2026 00:42:06 +0000 (0:00:00.168) 0:00:05.857 *****
2026-02-05 00:42:07.177625 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177629 | orchestrator |
2026-02-05 00:42:07.177633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:07.177636 | orchestrator | Thursday 05 February 2026 00:42:06 +0000 (0:00:00.178) 0:00:06.035 *****
2026-02-05 00:42:07.177640 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:07.177644 | orchestrator |
2026-02-05 00:42:07.177650 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:13.703421 | orchestrator | Thursday 05 February 2026 00:42:07 +0000 (0:00:00.178) 0:00:06.214 *****
2026-02-05 00:42:13.703578 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.703598 | orchestrator |
2026-02-05 00:42:13.703611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:13.703623 | orchestrator | Thursday 05 February 2026 00:42:07 +0000 (0:00:00.176) 0:00:06.391 *****
2026-02-05 00:42:13.703635 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-05 00:42:13.703647 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-05 00:42:13.703658 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-05 00:42:13.703676 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-05 00:42:13.703704 | orchestrator |
2026-02-05 00:42:13.703725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:13.703763 | orchestrator | Thursday 05 February 2026 00:42:08 +0000 (0:00:00.837) 0:00:07.228 *****
2026-02-05 00:42:13.703781 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.703800 | orchestrator |
2026-02-05 00:42:13.703819 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:13.703838 | orchestrator | Thursday 05 February 2026 00:42:08 +0000 (0:00:00.194) 0:00:07.422 *****
2026-02-05 00:42:13.703858 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.703877 | orchestrator |
2026-02-05 00:42:13.703895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:13.703910 | orchestrator | Thursday 05 February 2026 00:42:08 +0000 (0:00:00.179) 0:00:07.602 *****
2026-02-05 00:42:13.703922 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.703933 | orchestrator |
2026-02-05 00:42:13.703947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:13.703960 | orchestrator | Thursday 05 February 2026 00:42:08 +0000 (0:00:00.192) 0:00:07.795 *****
2026-02-05 00:42:13.703980 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704009 | orchestrator |
2026-02-05 00:42:13.704028 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-05 00:42:13.704048 | orchestrator | Thursday 05 February 2026 00:42:08 +0000 (0:00:00.202) 0:00:07.997 *****
2026-02-05 00:42:13.704065 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-05 00:42:13.704077 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-05 00:42:13.704088 | orchestrator |
2026-02-05 00:42:13.704099 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-05 00:42:13.704110 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.151) 0:00:08.149 *****
2026-02-05 00:42:13.704146 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704158 | orchestrator |
2026-02-05 00:42:13.704169 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-05 00:42:13.704180 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.127) 0:00:08.277 *****
2026-02-05 00:42:13.704213 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704224 | orchestrator |
2026-02-05 00:42:13.704234 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-05 00:42:13.704251 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.120) 0:00:08.397 *****
2026-02-05 00:42:13.704267 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704278 | orchestrator |
2026-02-05 00:42:13.704289 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-05 00:42:13.704300 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.124) 0:00:08.522 *****
2026-02-05 00:42:13.704311 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:42:13.704322 | orchestrator |
2026-02-05 00:42:13.704333 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-05 00:42:13.704344 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.120) 0:00:08.642 *****
2026-02-05 00:42:13.704356 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}})
2026-02-05 00:42:13.704368 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b54f13f-3e23-5303-9525-7c2d84d571dd'}})
2026-02-05 00:42:13.704379 | orchestrator |
2026-02-05 00:42:13.704390 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-05 00:42:13.704401 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.147) 0:00:08.789 *****
2026-02-05 00:42:13.704419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}})
2026-02-05 00:42:13.704442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b54f13f-3e23-5303-9525-7c2d84d571dd'}})
2026-02-05 00:42:13.704468 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704481 | orchestrator |
2026-02-05 00:42:13.704521 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-05 00:42:13.704538 | orchestrator | Thursday 05 February 2026 00:42:09 +0000 (0:00:00.139) 0:00:08.929 *****
2026-02-05 00:42:13.704553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}})
2026-02-05 00:42:13.704573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b54f13f-3e23-5303-9525-7c2d84d571dd'}})
2026-02-05 00:42:13.704585 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704596 | orchestrator |
2026-02-05 00:42:13.704607 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-05 00:42:13.704618 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.278) 0:00:09.207 *****
2026-02-05 00:42:13.704629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}})
2026-02-05 00:42:13.704658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b54f13f-3e23-5303-9525-7c2d84d571dd'}})
2026-02-05 00:42:13.704670 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:42:13.704681 |
orchestrator | 2026-02-05 00:42:13.704692 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-05 00:42:13.704703 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.137) 0:00:09.345 ***** 2026-02-05 00:42:13.704714 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:42:13.704725 | orchestrator | 2026-02-05 00:42:13.704736 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-05 00:42:13.704747 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.110) 0:00:09.456 ***** 2026-02-05 00:42:13.704758 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:42:13.704778 | orchestrator | 2026-02-05 00:42:13.704793 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-05 00:42:13.704811 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.123) 0:00:09.579 ***** 2026-02-05 00:42:13.704830 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:13.704842 | orchestrator | 2026-02-05 00:42:13.704853 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-05 00:42:13.704864 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.110) 0:00:09.690 ***** 2026-02-05 00:42:13.704875 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:13.704886 | orchestrator | 2026-02-05 00:42:13.704899 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-05 00:42:13.704917 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.117) 0:00:09.808 ***** 2026-02-05 00:42:13.704936 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:13.704950 | orchestrator | 2026-02-05 00:42:13.704961 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-05 00:42:13.704972 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 
(0:00:00.103) 0:00:09.912 ***** 2026-02-05 00:42:13.704982 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 00:42:13.704994 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:42:13.705005 | orchestrator |  "sdb": { 2026-02-05 00:42:13.705016 | orchestrator |  "osd_lvm_uuid": "9bc271eb-ec29-52a2-8b95-ff4dfb27e19f" 2026-02-05 00:42:13.705028 | orchestrator |  }, 2026-02-05 00:42:13.705039 | orchestrator |  "sdc": { 2026-02-05 00:42:13.705050 | orchestrator |  "osd_lvm_uuid": "1b54f13f-3e23-5303-9525-7c2d84d571dd" 2026-02-05 00:42:13.705061 | orchestrator |  } 2026-02-05 00:42:13.705072 | orchestrator |  } 2026-02-05 00:42:13.705083 | orchestrator | } 2026-02-05 00:42:13.705095 | orchestrator | 2026-02-05 00:42:13.705106 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-05 00:42:13.705117 | orchestrator | Thursday 05 February 2026 00:42:10 +0000 (0:00:00.112) 0:00:10.024 ***** 2026-02-05 00:42:13.705135 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:13.705148 | orchestrator | 2026-02-05 00:42:13.705160 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-05 00:42:13.705259 | orchestrator | Thursday 05 February 2026 00:42:11 +0000 (0:00:00.102) 0:00:10.127 ***** 2026-02-05 00:42:13.705276 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:13.705294 | orchestrator | 2026-02-05 00:42:13.705311 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-05 00:42:13.705326 | orchestrator | Thursday 05 February 2026 00:42:11 +0000 (0:00:00.113) 0:00:10.240 ***** 2026-02-05 00:42:13.705343 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:42:13.705361 | orchestrator | 2026-02-05 00:42:13.705380 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-05 00:42:13.705399 | orchestrator | Thursday 05 February 2026 00:42:11 +0000 
(0:00:00.114) 0:00:10.355 ***** 2026-02-05 00:42:13.705417 | orchestrator | changed: [testbed-node-3] => { 2026-02-05 00:42:13.705432 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-05 00:42:13.705448 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:42:13.705466 | orchestrator |  "sdb": { 2026-02-05 00:42:13.705478 | orchestrator |  "osd_lvm_uuid": "9bc271eb-ec29-52a2-8b95-ff4dfb27e19f" 2026-02-05 00:42:13.705518 | orchestrator |  }, 2026-02-05 00:42:13.705531 | orchestrator |  "sdc": { 2026-02-05 00:42:13.705542 | orchestrator |  "osd_lvm_uuid": "1b54f13f-3e23-5303-9525-7c2d84d571dd" 2026-02-05 00:42:13.705552 | orchestrator |  } 2026-02-05 00:42:13.705563 | orchestrator |  }, 2026-02-05 00:42:13.705574 | orchestrator |  "lvm_volumes": [ 2026-02-05 00:42:13.705585 | orchestrator |  { 2026-02-05 00:42:13.705596 | orchestrator |  "data": "osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f", 2026-02-05 00:42:13.705607 | orchestrator |  "data_vg": "ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f" 2026-02-05 00:42:13.705628 | orchestrator |  }, 2026-02-05 00:42:13.705639 | orchestrator |  { 2026-02-05 00:42:13.705650 | orchestrator |  "data": "osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd", 2026-02-05 00:42:13.705661 | orchestrator |  "data_vg": "ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd" 2026-02-05 00:42:13.705672 | orchestrator |  } 2026-02-05 00:42:13.705683 | orchestrator |  ] 2026-02-05 00:42:13.705694 | orchestrator |  } 2026-02-05 00:42:13.705705 | orchestrator | } 2026-02-05 00:42:13.705716 | orchestrator | 2026-02-05 00:42:13.705727 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-05 00:42:13.705744 | orchestrator | Thursday 05 February 2026 00:42:11 +0000 (0:00:00.311) 0:00:10.667 ***** 2026-02-05 00:42:13.705762 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-05 00:42:13.705780 | orchestrator | 2026-02-05 00:42:13.705800 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-05 00:42:13.705819 | orchestrator | 2026-02-05 00:42:13.705837 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 00:42:13.705856 | orchestrator | Thursday 05 February 2026 00:42:13 +0000 (0:00:01.632) 0:00:12.300 ***** 2026-02-05 00:42:13.705971 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-05 00:42:13.705985 | orchestrator | 2026-02-05 00:42:13.706000 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 00:42:13.706064 | orchestrator | Thursday 05 February 2026 00:42:13 +0000 (0:00:00.233) 0:00:12.533 ***** 2026-02-05 00:42:13.706079 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:42:13.706090 | orchestrator | 2026-02-05 00:42:13.706115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.396666 | orchestrator | Thursday 05 February 2026 00:42:13 +0000 (0:00:00.204) 0:00:12.738 ***** 2026-02-05 00:42:20.396778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-05 00:42:20.396799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:42:20.396820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:42:20.396836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:42:20.396847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:42:20.396858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:42:20.396870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:42:20.396885 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:42:20.396896 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-05 00:42:20.396908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:42:20.396919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:42:20.396930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:42:20.396961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:42:20.396974 | orchestrator | 2026-02-05 00:42:20.396986 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.396997 | orchestrator | Thursday 05 February 2026 00:42:14 +0000 (0:00:00.332) 0:00:13.070 ***** 2026-02-05 00:42:20.397008 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397020 | orchestrator | 2026-02-05 00:42:20.397031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397042 | orchestrator | Thursday 05 February 2026 00:42:14 +0000 (0:00:00.176) 0:00:13.247 ***** 2026-02-05 00:42:20.397077 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397089 | orchestrator | 2026-02-05 00:42:20.397101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397112 | orchestrator | Thursday 05 February 2026 00:42:14 +0000 (0:00:00.180) 0:00:13.428 ***** 2026-02-05 00:42:20.397122 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397133 | orchestrator | 2026-02-05 00:42:20.397144 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397155 | 
orchestrator | Thursday 05 February 2026 00:42:14 +0000 (0:00:00.181) 0:00:13.610 ***** 2026-02-05 00:42:20.397166 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397177 | orchestrator | 2026-02-05 00:42:20.397187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397198 | orchestrator | Thursday 05 February 2026 00:42:14 +0000 (0:00:00.181) 0:00:13.791 ***** 2026-02-05 00:42:20.397209 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397220 | orchestrator | 2026-02-05 00:42:20.397231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397242 | orchestrator | Thursday 05 February 2026 00:42:15 +0000 (0:00:00.466) 0:00:14.257 ***** 2026-02-05 00:42:20.397253 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397263 | orchestrator | 2026-02-05 00:42:20.397274 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397285 | orchestrator | Thursday 05 February 2026 00:42:15 +0000 (0:00:00.174) 0:00:14.432 ***** 2026-02-05 00:42:20.397296 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397306 | orchestrator | 2026-02-05 00:42:20.397317 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397328 | orchestrator | Thursday 05 February 2026 00:42:15 +0000 (0:00:00.141) 0:00:14.573 ***** 2026-02-05 00:42:20.397339 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397350 | orchestrator | 2026-02-05 00:42:20.397360 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397371 | orchestrator | Thursday 05 February 2026 00:42:15 +0000 (0:00:00.170) 0:00:14.743 ***** 2026-02-05 00:42:20.397382 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6) 2026-02-05 00:42:20.397394 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6) 2026-02-05 00:42:20.397405 | orchestrator | 2026-02-05 00:42:20.397416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397427 | orchestrator | Thursday 05 February 2026 00:42:16 +0000 (0:00:00.366) 0:00:15.110 ***** 2026-02-05 00:42:20.397438 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85) 2026-02-05 00:42:20.397449 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85) 2026-02-05 00:42:20.397459 | orchestrator | 2026-02-05 00:42:20.397470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397481 | orchestrator | Thursday 05 February 2026 00:42:16 +0000 (0:00:00.390) 0:00:15.500 ***** 2026-02-05 00:42:20.397517 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a) 2026-02-05 00:42:20.397528 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a) 2026-02-05 00:42:20.397539 | orchestrator | 2026-02-05 00:42:20.397551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:42:20.397580 | orchestrator | Thursday 05 February 2026 00:42:16 +0000 (0:00:00.378) 0:00:15.878 ***** 2026-02-05 00:42:20.397592 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c) 2026-02-05 00:42:20.397603 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c) 2026-02-05 00:42:20.397614 | orchestrator | 2026-02-05 00:42:20.397633 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-05 00:42:20.397644 | orchestrator | Thursday 05 February 2026 00:42:17 +0000 (0:00:00.385) 0:00:16.264 ***** 2026-02-05 00:42:20.397655 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 00:42:20.397666 | orchestrator | 2026-02-05 00:42:20.397677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.397688 | orchestrator | Thursday 05 February 2026 00:42:17 +0000 (0:00:00.301) 0:00:16.566 ***** 2026-02-05 00:42:20.397699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-05 00:42:20.397710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:42:20.397727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:42:20.397739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:42:20.397749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:42:20.397760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:42:20.397771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:42:20.397782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:42:20.397792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-05 00:42:20.397803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:42:20.397814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2026-02-05 00:42:20.397825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:42:20.397836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:42:20.397846 | orchestrator | 2026-02-05 00:42:20.397857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.397868 | orchestrator | Thursday 05 February 2026 00:42:17 +0000 (0:00:00.338) 0:00:16.904 ***** 2026-02-05 00:42:20.397879 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397890 | orchestrator | 2026-02-05 00:42:20.397900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.397911 | orchestrator | Thursday 05 February 2026 00:42:18 +0000 (0:00:00.510) 0:00:17.415 ***** 2026-02-05 00:42:20.397922 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397933 | orchestrator | 2026-02-05 00:42:20.397944 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.397955 | orchestrator | Thursday 05 February 2026 00:42:18 +0000 (0:00:00.174) 0:00:17.590 ***** 2026-02-05 00:42:20.397966 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.397976 | orchestrator | 2026-02-05 00:42:20.397987 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.397998 | orchestrator | Thursday 05 February 2026 00:42:18 +0000 (0:00:00.173) 0:00:17.763 ***** 2026-02-05 00:42:20.398009 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.398083 | orchestrator | 2026-02-05 00:42:20.398095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.398106 | orchestrator | Thursday 05 February 2026 00:42:18 +0000 (0:00:00.182) 0:00:17.945 ***** 2026-02-05 00:42:20.398117 
| orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.398128 | orchestrator | 2026-02-05 00:42:20.398139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.398150 | orchestrator | Thursday 05 February 2026 00:42:19 +0000 (0:00:00.145) 0:00:18.091 ***** 2026-02-05 00:42:20.398161 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.398179 | orchestrator | 2026-02-05 00:42:20.398246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.398260 | orchestrator | Thursday 05 February 2026 00:42:19 +0000 (0:00:00.164) 0:00:18.255 ***** 2026-02-05 00:42:20.398271 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.398282 | orchestrator | 2026-02-05 00:42:20.398293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.398304 | orchestrator | Thursday 05 February 2026 00:42:19 +0000 (0:00:00.142) 0:00:18.398 ***** 2026-02-05 00:42:20.398315 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:20.398326 | orchestrator | 2026-02-05 00:42:20.398337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.398348 | orchestrator | Thursday 05 February 2026 00:42:19 +0000 (0:00:00.174) 0:00:18.572 ***** 2026-02-05 00:42:20.398359 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-05 00:42:20.398371 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-05 00:42:20.398382 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-05 00:42:20.398394 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-05 00:42:20.398405 | orchestrator | 2026-02-05 00:42:20.398416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:20.398427 | orchestrator | Thursday 05 February 2026 00:42:20 +0000 (0:00:00.751) 
0:00:19.323 ***** 2026-02-05 00:42:20.398438 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.480413 | orchestrator | 2026-02-05 00:42:26.480587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:26.480604 | orchestrator | Thursday 05 February 2026 00:42:20 +0000 (0:00:00.181) 0:00:19.504 ***** 2026-02-05 00:42:26.480612 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.480621 | orchestrator | 2026-02-05 00:42:26.480628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:26.480635 | orchestrator | Thursday 05 February 2026 00:42:20 +0000 (0:00:00.172) 0:00:19.677 ***** 2026-02-05 00:42:26.480642 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.480649 | orchestrator | 2026-02-05 00:42:26.480659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:42:26.480671 | orchestrator | Thursday 05 February 2026 00:42:20 +0000 (0:00:00.174) 0:00:19.851 ***** 2026-02-05 00:42:26.480685 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.480701 | orchestrator | 2026-02-05 00:42:26.480711 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-05 00:42:26.480722 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.613) 0:00:20.465 ***** 2026-02-05 00:42:26.480733 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-05 00:42:26.480744 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-05 00:42:26.480755 | orchestrator | 2026-02-05 00:42:26.480765 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-05 00:42:26.480795 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.185) 0:00:20.651 ***** 2026-02-05 00:42:26.480806 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 00:42:26.480817 | orchestrator | 2026-02-05 00:42:26.480828 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-05 00:42:26.480839 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.117) 0:00:20.768 ***** 2026-02-05 00:42:26.480848 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.480858 | orchestrator | 2026-02-05 00:42:26.480868 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-05 00:42:26.480883 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.089) 0:00:20.858 ***** 2026-02-05 00:42:26.480893 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.480903 | orchestrator | 2026-02-05 00:42:26.480913 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-05 00:42:26.480924 | orchestrator | Thursday 05 February 2026 00:42:21 +0000 (0:00:00.129) 0:00:20.987 ***** 2026-02-05 00:42:26.480955 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:42:26.480963 | orchestrator | 2026-02-05 00:42:26.480969 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-05 00:42:26.480975 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.120) 0:00:21.108 ***** 2026-02-05 00:42:26.480982 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}}) 2026-02-05 00:42:26.480989 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a29ad6cb-22eb-5988-a460-3c83981a9937'}}) 2026-02-05 00:42:26.480995 | orchestrator | 2026-02-05 00:42:26.481001 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-05 00:42:26.481007 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.192) 0:00:21.301 ***** 2026-02-05 00:42:26.481014 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}})  2026-02-05 00:42:26.481022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a29ad6cb-22eb-5988-a460-3c83981a9937'}})  2026-02-05 00:42:26.481028 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.481035 | orchestrator | 2026-02-05 00:42:26.481041 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-05 00:42:26.481047 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.157) 0:00:21.458 ***** 2026-02-05 00:42:26.481053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}})  2026-02-05 00:42:26.481059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a29ad6cb-22eb-5988-a460-3c83981a9937'}})  2026-02-05 00:42:26.481066 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.481072 | orchestrator | 2026-02-05 00:42:26.481079 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-05 00:42:26.481085 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.174) 0:00:21.633 ***** 2026-02-05 00:42:26.481091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}})  2026-02-05 00:42:26.481097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a29ad6cb-22eb-5988-a460-3c83981a9937'}})  2026-02-05 00:42:26.481103 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.481109 | orchestrator | 2026-02-05 00:42:26.481115 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-05 00:42:26.481122 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 
(0:00:00.139) 0:00:21.773 ***** 2026-02-05 00:42:26.481128 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:42:26.481134 | orchestrator | 2026-02-05 00:42:26.481140 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-05 00:42:26.481146 | orchestrator | Thursday 05 February 2026 00:42:22 +0000 (0:00:00.141) 0:00:21.914 ***** 2026-02-05 00:42:26.481153 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:42:26.481159 | orchestrator | 2026-02-05 00:42:26.481165 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-05 00:42:26.481171 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.139) 0:00:22.053 ***** 2026-02-05 00:42:26.481193 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.481200 | orchestrator | 2026-02-05 00:42:26.481206 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-05 00:42:26.481212 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.249) 0:00:22.303 ***** 2026-02-05 00:42:26.481218 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.481224 | orchestrator | 2026-02-05 00:42:26.481231 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-05 00:42:26.481237 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.110) 0:00:22.413 ***** 2026-02-05 00:42:26.481243 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:42:26.481255 | orchestrator | 2026-02-05 00:42:26.481262 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-05 00:42:26.481268 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.140) 0:00:22.554 ***** 2026-02-05 00:42:26.481274 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:42:26.481281 | orchestrator |  "ceph_osd_devices": { 2026-02-05 00:42:26.481287 | orchestrator |  "sdb": { 
2026-02-05 00:42:26.481294 | orchestrator |             "osd_lvm_uuid": "50aca8a8-e8e5-56ca-ab64-02beaf30ee0c"
2026-02-05 00:42:26.481300 | orchestrator |         },
2026-02-05 00:42:26.481307 | orchestrator |         "sdc": {
2026-02-05 00:42:26.481313 | orchestrator |             "osd_lvm_uuid": "a29ad6cb-22eb-5988-a460-3c83981a9937"
2026-02-05 00:42:26.481320 | orchestrator |         }
2026-02-05 00:42:26.481326 | orchestrator |     }
2026-02-05 00:42:26.481332 | orchestrator | }
2026-02-05 00:42:26.481339 | orchestrator |
2026-02-05 00:42:26.481345 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-05 00:42:26.481351 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.178) 0:00:22.733 *****
2026-02-05 00:42:26.481357 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:42:26.481363 | orchestrator |
2026-02-05 00:42:26.481370 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-05 00:42:26.481376 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.116) 0:00:22.849 *****
2026-02-05 00:42:26.481382 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:42:26.481388 | orchestrator |
2026-02-05 00:42:26.481397 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-05 00:42:26.481408 | orchestrator | Thursday 05 February 2026 00:42:23 +0000 (0:00:00.144) 0:00:22.994 *****
2026-02-05 00:42:26.481418 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:42:26.481428 | orchestrator |
2026-02-05 00:42:26.481438 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-05 00:42:26.481456 | orchestrator | Thursday 05 February 2026 00:42:24 +0000 (0:00:00.099) 0:00:23.093 *****
2026-02-05 00:42:26.481466 | orchestrator | changed: [testbed-node-4] => {
2026-02-05 00:42:26.481475 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-05 00:42:26.481512 | orchestrator |         "ceph_osd_devices": {
2026-02-05 00:42:26.481522 | orchestrator |             "sdb": {
2026-02-05 00:42:26.481532 | orchestrator |                 "osd_lvm_uuid": "50aca8a8-e8e5-56ca-ab64-02beaf30ee0c"
2026-02-05 00:42:26.481543 | orchestrator |             },
2026-02-05 00:42:26.481553 | orchestrator |             "sdc": {
2026-02-05 00:42:26.481563 | orchestrator |                 "osd_lvm_uuid": "a29ad6cb-22eb-5988-a460-3c83981a9937"
2026-02-05 00:42:26.481573 | orchestrator |             }
2026-02-05 00:42:26.481583 | orchestrator |         },
2026-02-05 00:42:26.481595 | orchestrator |         "lvm_volumes": [
2026-02-05 00:42:26.481605 | orchestrator |             {
2026-02-05 00:42:26.481615 | orchestrator |                 "data": "osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c",
2026-02-05 00:42:26.481626 | orchestrator |                 "data_vg": "ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c"
2026-02-05 00:42:26.481637 | orchestrator |             },
2026-02-05 00:42:26.481648 | orchestrator |             {
2026-02-05 00:42:26.481657 | orchestrator |                 "data": "osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937",
2026-02-05 00:42:26.481668 | orchestrator |                 "data_vg": "ceph-a29ad6cb-22eb-5988-a460-3c83981a9937"
2026-02-05 00:42:26.481678 | orchestrator |             }
2026-02-05 00:42:26.481689 | orchestrator |         ]
2026-02-05 00:42:26.481699 | orchestrator |     }
2026-02-05 00:42:26.481710 | orchestrator | }
2026-02-05 00:42:26.481721 | orchestrator |
2026-02-05 00:42:26.481733 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-05 00:42:26.481743 | orchestrator | Thursday 05 February 2026 00:42:24 +0000 (0:00:00.193) 0:00:23.287 *****
2026-02-05 00:42:26.481753 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-05 00:42:26.481763 | orchestrator |
2026-02-05 00:42:26.481785 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-05 00:42:26.481796 | orchestrator |
2026-02-05 00:42:26.481807 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 00:42:26.481817 | orchestrator | Thursday 05 February 2026 00:42:25 +0000 (0:00:01.006) 0:00:24.294 *****
2026-02-05 00:42:26.481829 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-05 00:42:26.481839 | orchestrator |
2026-02-05 00:42:26.481850 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-05 00:42:26.481859 | orchestrator | Thursday 05 February 2026 00:42:25 +0000 (0:00:00.664) 0:00:24.958 *****
2026-02-05 00:42:26.481869 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:42:26.481879 | orchestrator |
2026-02-05 00:42:26.481887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:26.481896 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.222) 0:00:25.181 *****
2026-02-05 00:42:26.481905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-05 00:42:26.481915 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-05 00:42:26.481925 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-05 00:42:26.481936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-05 00:42:26.481946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-05 00:42:26.481966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-05 00:42:33.559991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-05 00:42:33.560073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-05 00:42:33.560082 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-05 00:42:33.560089 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-05 00:42:33.560096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-05 00:42:33.560102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-05 00:42:33.560108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-05 00:42:33.560115 | orchestrator |
2026-02-05 00:42:33.560123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560130 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.420) 0:00:25.601 *****
2026-02-05 00:42:33.560137 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560144 | orchestrator |
2026-02-05 00:42:33.560151 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560157 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.188) 0:00:25.790 *****
2026-02-05 00:42:33.560163 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560169 | orchestrator |
2026-02-05 00:42:33.560176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560182 | orchestrator | Thursday 05 February 2026 00:42:26 +0000 (0:00:00.201) 0:00:25.991 *****
2026-02-05 00:42:33.560188 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560194 | orchestrator |
2026-02-05 00:42:33.560201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560207 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.179) 0:00:26.171 *****
2026-02-05 00:42:33.560213 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560219 | orchestrator |
2026-02-05 00:42:33.560226 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560232 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.172) 0:00:26.344 *****
2026-02-05 00:42:33.560257 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560263 | orchestrator |
2026-02-05 00:42:33.560270 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560276 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.195) 0:00:26.539 *****
2026-02-05 00:42:33.560282 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560288 | orchestrator |
2026-02-05 00:42:33.560295 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560301 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.167) 0:00:26.707 *****
2026-02-05 00:42:33.560307 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560313 | orchestrator |
2026-02-05 00:42:33.560320 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560326 | orchestrator | Thursday 05 February 2026 00:42:27 +0000 (0:00:00.190) 0:00:26.898 *****
2026-02-05 00:42:33.560332 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560339 | orchestrator |
2026-02-05 00:42:33.560345 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560351 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.188) 0:00:27.087 *****
2026-02-05 00:42:33.560357 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11)
2026-02-05 00:42:33.560365 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11)
2026-02-05 00:42:33.560371 | orchestrator |
2026-02-05 00:42:33.560377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560383 | orchestrator | Thursday 05 February 2026 00:42:28 +0000 (0:00:00.740) 0:00:27.828 *****
2026-02-05 00:42:33.560404 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f)
2026-02-05 00:42:33.560411 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f)
2026-02-05 00:42:33.560417 | orchestrator |
2026-02-05 00:42:33.560423 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560429 | orchestrator | Thursday 05 February 2026 00:42:29 +0000 (0:00:00.497) 0:00:28.325 *****
2026-02-05 00:42:33.560436 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd)
2026-02-05 00:42:33.560442 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd)
2026-02-05 00:42:33.560448 | orchestrator |
2026-02-05 00:42:33.560454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560461 | orchestrator | Thursday 05 February 2026 00:42:29 +0000 (0:00:00.398) 0:00:28.724 *****
2026-02-05 00:42:33.560467 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7)
2026-02-05 00:42:33.560473 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7)
2026-02-05 00:42:33.560519 | orchestrator |
2026-02-05 00:42:33.560527 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:42:33.560534 | orchestrator | Thursday 05 February 2026 00:42:30 +0000 (0:00:00.415) 0:00:29.139 *****
2026-02-05 00:42:33.560540 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 00:42:33.560546 | orchestrator |
2026-02-05 00:42:33.560552 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560574 | orchestrator | Thursday 05 February 2026 00:42:30 +0000 (0:00:00.382) 0:00:29.522 *****
2026-02-05 00:42:33.560582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-05 00:42:33.560590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-05 00:42:33.560597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-05 00:42:33.560605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-05 00:42:33.560618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-05 00:42:33.560626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-05 00:42:33.560634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-05 00:42:33.560642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-05 00:42:33.560649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-05 00:42:33.560657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-05 00:42:33.560665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-05 00:42:33.560672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-05 00:42:33.560679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-05 00:42:33.560686 | orchestrator |
2026-02-05 00:42:33.560694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560702 | orchestrator | Thursday 05 February 2026 00:42:30 +0000 (0:00:00.282) 0:00:29.804 *****
2026-02-05 00:42:33.560709 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560716 | orchestrator |
2026-02-05 00:42:33.560724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560732 | orchestrator | Thursday 05 February 2026 00:42:30 +0000 (0:00:00.167) 0:00:29.972 *****
2026-02-05 00:42:33.560739 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560747 | orchestrator |
2026-02-05 00:42:33.560755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560761 | orchestrator | Thursday 05 February 2026 00:42:31 +0000 (0:00:00.162) 0:00:30.134 *****
2026-02-05 00:42:33.560768 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560774 | orchestrator |
2026-02-05 00:42:33.560781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560788 | orchestrator | Thursday 05 February 2026 00:42:31 +0000 (0:00:00.175) 0:00:30.310 *****
2026-02-05 00:42:33.560794 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560801 | orchestrator |
2026-02-05 00:42:33.560807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560814 | orchestrator | Thursday 05 February 2026 00:42:31 +0000 (0:00:00.173) 0:00:30.484 *****
2026-02-05 00:42:33.560820 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560826 | orchestrator |
2026-02-05 00:42:33.560833 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560839 | orchestrator | Thursday 05 February 2026 00:42:31 +0000 (0:00:00.170) 0:00:30.655 *****
2026-02-05 00:42:33.560846 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560915 | orchestrator |
2026-02-05 00:42:33.560921 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560928 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:00.485) 0:00:31.140 *****
2026-02-05 00:42:33.560934 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560940 | orchestrator |
2026-02-05 00:42:33.560946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560952 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:00.176) 0:00:31.317 *****
2026-02-05 00:42:33.560958 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.560964 | orchestrator |
2026-02-05 00:42:33.560971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.560977 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:00.146) 0:00:31.463 *****
2026-02-05 00:42:33.560983 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-05 00:42:33.560995 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-05 00:42:33.561002 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-05 00:42:33.561008 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-05 00:42:33.561014 | orchestrator |
2026-02-05 00:42:33.561020 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.561026 | orchestrator | Thursday 05 February 2026 00:42:32 +0000 (0:00:00.537) 0:00:32.001 *****
2026-02-05 00:42:33.561033 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.561039 | orchestrator |
2026-02-05 00:42:33.561045 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.561051 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.141) 0:00:32.143 *****
2026-02-05 00:42:33.561057 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.561063 | orchestrator |
2026-02-05 00:42:33.561069 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.561076 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.142) 0:00:32.286 *****
2026-02-05 00:42:33.561082 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.561088 | orchestrator |
2026-02-05 00:42:33.561094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:42:33.561100 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.150) 0:00:32.436 *****
2026-02-05 00:42:33.561106 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:33.561112 | orchestrator |
2026-02-05 00:42:33.561127 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-05 00:42:37.767018 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.160) 0:00:32.596 *****
2026-02-05 00:42:37.767143 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-02-05 00:42:37.767159 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-02-05 00:42:37.767172 | orchestrator |
2026-02-05 00:42:37.767184 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-05 00:42:37.767195 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.211) 0:00:32.808 *****
2026-02-05 00:42:37.767206 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767218 | orchestrator |
2026-02-05 00:42:37.767229 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-05 00:42:37.767240 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.103) 0:00:32.913 *****
2026-02-05 00:42:37.767267 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767279 | orchestrator |
2026-02-05 00:42:37.767290 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-05 00:42:37.767301 | orchestrator | Thursday 05 February 2026 00:42:33 +0000 (0:00:00.099) 0:00:33.012 *****
2026-02-05 00:42:37.767312 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767322 | orchestrator |
2026-02-05 00:42:37.767334 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-05 00:42:37.767345 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.264) 0:00:33.277 *****
2026-02-05 00:42:37.767357 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:42:37.767368 | orchestrator |
2026-02-05 00:42:37.767379 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-05 00:42:37.767390 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.114) 0:00:33.391 *****
2026-02-05 00:42:37.767402 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44714651-8fa8-5efe-842f-d8a32b49e267'}})
2026-02-05 00:42:37.767418 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'}})
2026-02-05 00:42:37.767429 | orchestrator |
2026-02-05 00:42:37.767440 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-05 00:42:37.767451 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.148) 0:00:33.540 *****
2026-02-05 00:42:37.767463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44714651-8fa8-5efe-842f-d8a32b49e267'}})
2026-02-05 00:42:37.767524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'}})
2026-02-05 00:42:37.767537 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767548 | orchestrator |
2026-02-05 00:42:37.767560 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-05 00:42:37.767571 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.145) 0:00:33.685 *****
2026-02-05 00:42:37.767582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44714651-8fa8-5efe-842f-d8a32b49e267'}})
2026-02-05 00:42:37.767593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'}})
2026-02-05 00:42:37.767604 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767615 | orchestrator |
2026-02-05 00:42:37.767626 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-05 00:42:37.767637 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.158) 0:00:33.844 *****
2026-02-05 00:42:37.767648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44714651-8fa8-5efe-842f-d8a32b49e267'}})
2026-02-05 00:42:37.767659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'}})
2026-02-05 00:42:37.767670 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767682 | orchestrator |
2026-02-05 00:42:37.767692 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-05 00:42:37.767703 | orchestrator | Thursday 05 February 2026 00:42:34 +0000 (0:00:00.182) 0:00:34.026 *****
2026-02-05 00:42:37.767714 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:42:37.767725 | orchestrator |
2026-02-05 00:42:37.767736 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-05 00:42:37.767747 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.192) 0:00:34.219 *****
2026-02-05 00:42:37.767758 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:42:37.767769 | orchestrator |
2026-02-05 00:42:37.767780 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-05 00:42:37.767791 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.156) 0:00:34.375 *****
2026-02-05 00:42:37.767802 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767813 | orchestrator |
2026-02-05 00:42:37.767823 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-05 00:42:37.767834 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.139) 0:00:34.515 *****
2026-02-05 00:42:37.767845 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767856 | orchestrator |
2026-02-05 00:42:37.767867 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-05 00:42:37.767878 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.183) 0:00:34.698 *****
2026-02-05 00:42:37.767889 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.767899 | orchestrator |
2026-02-05 00:42:37.767911 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-05 00:42:37.767922 | orchestrator | Thursday 05 February 2026 00:42:35 +0000 (0:00:00.185) 0:00:34.884 *****
2026-02-05 00:42:37.767933 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 00:42:37.767944 | orchestrator |     "ceph_osd_devices": {
2026-02-05 00:42:37.767955 | orchestrator |         "sdb": {
2026-02-05 00:42:37.767984 | orchestrator |             "osd_lvm_uuid": "44714651-8fa8-5efe-842f-d8a32b49e267"
2026-02-05 00:42:37.767997 | orchestrator |         },
2026-02-05 00:42:37.768008 | orchestrator |         "sdc": {
2026-02-05 00:42:37.768020 | orchestrator |             "osd_lvm_uuid": "56069e6e-1b0b-5c3d-aabe-9f5e4e37a685"
2026-02-05 00:42:37.768031 | orchestrator |         }
2026-02-05 00:42:37.768042 | orchestrator |     }
2026-02-05 00:42:37.768053 | orchestrator | }
2026-02-05 00:42:37.768065 | orchestrator |
2026-02-05 00:42:37.768083 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-05 00:42:37.768094 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.183) 0:00:35.067 *****
2026-02-05 00:42:37.768105 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.768116 | orchestrator |
2026-02-05 00:42:37.768127 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-05 00:42:37.768138 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.325) 0:00:35.392 *****
2026-02-05 00:42:37.768149 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.768160 | orchestrator |
2026-02-05 00:42:37.768171 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-05 00:42:37.768181 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.121) 0:00:35.514 *****
2026-02-05 00:42:37.768192 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:42:37.768203 | orchestrator |
2026-02-05 00:42:37.768214 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-05 00:42:37.768225 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.128) 0:00:35.642 *****
2026-02-05 00:42:37.768236 | orchestrator | changed: [testbed-node-5] => {
2026-02-05 00:42:37.768247 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-05 00:42:37.768259 | orchestrator |         "ceph_osd_devices": {
2026-02-05 00:42:37.768270 | orchestrator |             "sdb": {
2026-02-05 00:42:37.768281 | orchestrator |                 "osd_lvm_uuid": "44714651-8fa8-5efe-842f-d8a32b49e267"
2026-02-05 00:42:37.768292 | orchestrator |             },
2026-02-05 00:42:37.768304 | orchestrator |             "sdc": {
2026-02-05 00:42:37.768315 | orchestrator |                 "osd_lvm_uuid": "56069e6e-1b0b-5c3d-aabe-9f5e4e37a685"
2026-02-05 00:42:37.768326 | orchestrator |             }
2026-02-05 00:42:37.768337 | orchestrator |         },
2026-02-05 00:42:37.768348 | orchestrator |         "lvm_volumes": [
2026-02-05 00:42:37.768359 | orchestrator |             {
2026-02-05 00:42:37.768371 | orchestrator |                 "data": "osd-block-44714651-8fa8-5efe-842f-d8a32b49e267",
2026-02-05 00:42:37.768382 | orchestrator |                 "data_vg": "ceph-44714651-8fa8-5efe-842f-d8a32b49e267"
2026-02-05 00:42:37.768393 | orchestrator |             },
2026-02-05 00:42:37.768408 | orchestrator |             {
2026-02-05 00:42:37.768420 | orchestrator |                 "data": "osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685",
2026-02-05 00:42:37.768431 | orchestrator |                 "data_vg": "ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685"
2026-02-05 00:42:37.768442 | orchestrator |             }
2026-02-05 00:42:37.768454 | orchestrator |         ]
2026-02-05 00:42:37.768465 | orchestrator |     }
2026-02-05 00:42:37.768529 | orchestrator | }
2026-02-05 00:42:37.768550 | orchestrator |
2026-02-05 00:42:37.768568 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-05 00:42:37.768586 | orchestrator | Thursday 05 February 2026 00:42:36 +0000 (0:00:00.214) 0:00:35.857 *****
2026-02-05 00:42:37.768605 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-05 00:42:37.768617 | orchestrator |
2026-02-05 00:42:37.768627 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:42:37.768639 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 00:42:37.768651 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 00:42:37.768662 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-05 00:42:37.768672 | orchestrator |
2026-02-05 00:42:37.768683 | orchestrator |
2026-02-05 00:42:37.768694 | orchestrator |
2026-02-05 00:42:37.768705 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:42:37.768716 | orchestrator | Thursday 05 February 2026 00:42:37 +0000 (0:00:00.925) 0:00:36.782 *****
2026-02-05 00:42:37.768736 | orchestrator | ===============================================================================
2026-02-05 00:42:37.768747 | orchestrator | Write configuration file ------------------------------------------------ 3.56s
2026-02-05 00:42:37.768758 | orchestrator | Add known links to the list of available block devices ------------------ 1.15s
2026-02-05 00:42:37.768776 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.11s
2026-02-05 00:42:37.768787 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s
2026-02-05 00:42:37.768798 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2026-02-05 00:42:37.768809 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-02-05 00:42:37.768820 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-02-05 00:42:37.768830 | orchestrator | Print configuration data ------------------------------------------------ 0.72s
2026-02-05 00:42:37.768841 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s
2026-02-05 00:42:37.768852 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2026-02-05 00:42:37.768870 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.61s
2026-02-05 00:42:37.768886 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-02-05 00:42:37.768897 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.55s
2026-02-05 00:42:37.768917 | orchestrator | Print WAL devices ------------------------------------------------------- 0.54s
2026-02-05 00:42:38.019623 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s
2026-02-05 00:42:38.019724 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.52s
2026-02-05 00:42:38.019740 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s
2026-02-05 00:42:38.019752 | orchestrator | Set DB devices config data ---------------------------------------------- 0.50s
2026-02-05 00:42:38.019764 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s
2026-02-05 00:42:38.019775 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.49s
2026-02-05 00:43:00.755177 | orchestrator | 2026-02-05 00:43:00 | INFO  | Task 9e64b1cf-006c-4b27-9d11-eb6cd318309d (sync inventory) is running in background. Output coming soon.
2026-02-05 00:43:24.929665 | orchestrator | 2026-02-05 00:43:02 | INFO  | Starting group_vars file reorganization
2026-02-05 00:43:24.929767 | orchestrator | 2026-02-05 00:43:02 | INFO  | Moved 0 file(s) to their respective directories
2026-02-05 00:43:24.929778 | orchestrator | 2026-02-05 00:43:02 | INFO  | Group_vars file reorganization completed
2026-02-05 00:43:24.929807 | orchestrator | 2026-02-05 00:43:04 | INFO  | Starting variable preparation from inventory
2026-02-05 00:43:24.929815 | orchestrator | 2026-02-05 00:43:07 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-05 00:43:24.929821 | orchestrator | 2026-02-05 00:43:07 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-05 00:43:24.929841 | orchestrator | 2026-02-05 00:43:07 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-05 00:43:24.929847 | orchestrator | 2026-02-05 00:43:07 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-05 00:43:24.929853 | orchestrator | 2026-02-05 00:43:07 | INFO  | Variable preparation completed
2026-02-05 00:43:24.929858 | orchestrator | 2026-02-05 00:43:08 | INFO  | Starting inventory overwrite handling
2026-02-05 00:43:24.929864 | orchestrator | 2026-02-05 00:43:08 | INFO  | Handling group overwrites in 99-overwrite
2026-02-05 00:43:24.929869 | orchestrator | 2026-02-05 00:43:08 | INFO  | Removing group frr:children from 60-generic
2026-02-05 00:43:24.929895 | orchestrator | 2026-02-05 00:43:08 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-05 00:43:24.929902 | orchestrator | 2026-02-05 00:43:08 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-05 00:43:24.929908 | orchestrator | 2026-02-05 00:43:08 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-05 00:43:24.929913 | orchestrator | 2026-02-05 00:43:08 | INFO  | Handling group overwrites in 20-roles
2026-02-05 00:43:24.929919 | orchestrator | 2026-02-05 00:43:08 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-05 00:43:24.929924 | orchestrator | 2026-02-05 00:43:08 | INFO  | Removed 5 group(s) in total
2026-02-05 00:43:24.929930 | orchestrator | 2026-02-05 00:43:08 | INFO  | Inventory overwrite handling completed
2026-02-05 00:43:24.929936 | orchestrator | 2026-02-05 00:43:09 | INFO  | Starting merge of inventory files
2026-02-05 00:43:24.929949 | orchestrator | 2026-02-05 00:43:09 | INFO  | Inventory files merged successfully
2026-02-05 00:43:24.929955 | orchestrator | 2026-02-05 00:43:13 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-05 00:43:24.929960 | orchestrator | 2026-02-05 00:43:23 | INFO  | Successfully wrote ClusterShell configuration
2026-02-05 00:43:24.929966 | orchestrator | [master 5782b20] 2026-02-05-00-43
2026-02-05 00:43:24.929973 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-05 00:43:26.986636 | orchestrator | 2026-02-05 00:43:26 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-02-05 00:43:27.043434 | orchestrator | 2026-02-05 00:43:27 | INFO  | Task 4672d9ac-93dc-43ff-a1cf-f2df222bcb52 (ceph-create-lvm-devices) was prepared for execution.
2026-02-05 00:43:27.043565 | orchestrator | 2026-02-05 00:43:27 | INFO  | It takes a moment until task 4672d9ac-93dc-43ff-a1cf-f2df222bcb52 (ceph-create-lvm-devices) has been started and output is visible here.
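The ceph-create-lvm-devices play whose output follows creates one LVM volume group and one logical volume per entry in `ceph_osd_devices`, using the `osd_lvm_uuid` of each device. A minimal sketch of that naming scheme, as it appears in the task output below (the dict literal and function name are illustrative, not part of the playbook):

```python
# Hypothetical sketch: derive the LV/VG names the play reports for each
# OSD device. Pattern visible in the log: VG "ceph-<osd_lvm_uuid>",
# LV "osd-block-<osd_lvm_uuid>".
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "9bc271eb-ec29-52a2-8b95-ff4dfb27e19f"},
    "sdc": {"osd_lvm_uuid": "1b54f13f-3e23-5303-9525-7c2d84d571dd"},
}

def lvm_names(devices: dict) -> list[dict]:
    """Map each OSD device entry to its LV ('data') and VG ('data_vg') name."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

for entry in lvm_names(ceph_osd_devices):
    print(entry)
```

These are exactly the `{'data': ..., 'data_vg': ...}` items looped over by the "Create block VGs" and "Create block LVs" tasks in the log.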
2026-02-05 00:43:37.993390 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 00:43:37.993509 | orchestrator | 2.16.14
2026-02-05 00:43:37.993525 | orchestrator |
2026-02-05 00:43:37.993535 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-05 00:43:37.993546 | orchestrator |
2026-02-05 00:43:37.993561 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-05 00:43:37.993574 | orchestrator | Thursday 05 February 2026 00:43:31 +0000 (0:00:00.279) 0:00:00.279 *****
2026-02-05 00:43:37.993589 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-05 00:43:37.993604 | orchestrator |
2026-02-05 00:43:37.993618 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-05 00:43:37.993633 | orchestrator | Thursday 05 February 2026 00:43:31 +0000 (0:00:00.221) 0:00:00.501 *****
2026-02-05 00:43:37.993647 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:43:37.993661 | orchestrator |
2026-02-05 00:43:37.993675 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.993689 | orchestrator | Thursday 05 February 2026 00:43:31 +0000 (0:00:00.213) 0:00:00.714 *****
2026-02-05 00:43:37.993704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:43:37.993714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:43:37.993722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:43:37.993730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:43:37.993738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:43:37.993745 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:43:37.993753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:43:37.993781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:43:37.993789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-05 00:43:37.993797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:43:37.993805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:43:37.993813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:43:37.993821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:43:37.993829 | orchestrator |
2026-02-05 00:43:37.993837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.993844 | orchestrator | Thursday 05 February 2026 00:43:31 +0000 (0:00:00.464) 0:00:01.179 *****
2026-02-05 00:43:37.993852 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.993860 | orchestrator |
2026-02-05 00:43:37.993868 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.993876 | orchestrator | Thursday 05 February 2026 00:43:32 +0000 (0:00:00.162) 0:00:01.341 *****
2026-02-05 00:43:37.993884 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.993891 | orchestrator |
2026-02-05 00:43:37.993899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.993907 | orchestrator | Thursday 05 February 2026 00:43:32 +0000 (0:00:00.172) 0:00:01.513 *****
2026-02-05 00:43:37.993915 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.993922 | orchestrator |
2026-02-05 00:43:37.993930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.993938 | orchestrator | Thursday 05 February 2026 00:43:32 +0000 (0:00:00.175) 0:00:01.688 *****
2026-02-05 00:43:37.993948 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.993957 | orchestrator |
2026-02-05 00:43:37.993966 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.993975 | orchestrator | Thursday 05 February 2026 00:43:32 +0000 (0:00:00.185) 0:00:01.874 *****
2026-02-05 00:43:37.993983 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.993993 | orchestrator |
2026-02-05 00:43:37.994002 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994069 | orchestrator | Thursday 05 February 2026 00:43:32 +0000 (0:00:00.195) 0:00:02.070 *****
2026-02-05 00:43:37.994079 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994089 | orchestrator |
2026-02-05 00:43:37.994104 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994118 | orchestrator | Thursday 05 February 2026 00:43:33 +0000 (0:00:00.191) 0:00:02.261 *****
2026-02-05 00:43:37.994133 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994148 | orchestrator |
2026-02-05 00:43:37.994162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994172 | orchestrator | Thursday 05 February 2026 00:43:33 +0000 (0:00:00.191) 0:00:02.453 *****
2026-02-05 00:43:37.994181 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994190 | orchestrator |
2026-02-05 00:43:37.994199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994209 | orchestrator | Thursday 05 February 2026 00:43:33 +0000 (0:00:00.184) 0:00:02.637 *****
2026-02-05 00:43:37.994218 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58)
2026-02-05 00:43:37.994229 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58)
2026-02-05 00:43:37.994238 | orchestrator |
2026-02-05 00:43:37.994247 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994275 | orchestrator | Thursday 05 February 2026 00:43:33 +0000 (0:00:00.424) 0:00:03.061 *****
2026-02-05 00:43:37.994300 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b)
2026-02-05 00:43:37.994315 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b)
2026-02-05 00:43:37.994327 | orchestrator |
2026-02-05 00:43:37.994340 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994353 | orchestrator | Thursday 05 February 2026 00:43:34 +0000 (0:00:00.520) 0:00:03.582 *****
2026-02-05 00:43:37.994367 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3)
2026-02-05 00:43:37.994380 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3)
2026-02-05 00:43:37.994394 | orchestrator |
2026-02-05 00:43:37.994409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994417 | orchestrator | Thursday 05 February 2026 00:43:34 +0000 (0:00:00.537) 0:00:04.120 *****
2026-02-05 00:43:37.994425 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726)
2026-02-05 00:43:37.994433 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726)
2026-02-05 00:43:37.994441 | orchestrator |
2026-02-05 00:43:37.994448 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:43:37.994476 | orchestrator | Thursday 05 February 2026 00:43:35 +0000 (0:00:00.842) 0:00:04.962 *****
2026-02-05 00:43:37.994484 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 00:43:37.994492 | orchestrator |
2026-02-05 00:43:37.994500 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994508 | orchestrator | Thursday 05 February 2026 00:43:36 +0000 (0:00:00.340) 0:00:05.302 *****
2026-02-05 00:43:37.994515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-05 00:43:37.994524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-05 00:43:37.994531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-05 00:43:37.994539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-05 00:43:37.994547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-05 00:43:37.994560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-05 00:43:37.994568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-05 00:43:37.994576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-05 00:43:37.994584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-05 00:43:37.994592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-05 00:43:37.994603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-05 00:43:37.994617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-05 00:43:37.994630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-05 00:43:37.994644 | orchestrator |
2026-02-05 00:43:37.994657 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994666 | orchestrator | Thursday 05 February 2026 00:43:36 +0000 (0:00:00.416) 0:00:05.719 *****
2026-02-05 00:43:37.994673 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994681 | orchestrator |
2026-02-05 00:43:37.994689 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994697 | orchestrator | Thursday 05 February 2026 00:43:36 +0000 (0:00:00.216) 0:00:05.935 *****
2026-02-05 00:43:37.994711 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994719 | orchestrator |
2026-02-05 00:43:37.994727 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994734 | orchestrator | Thursday 05 February 2026 00:43:36 +0000 (0:00:00.217) 0:00:06.153 *****
2026-02-05 00:43:37.994742 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994750 | orchestrator |
2026-02-05 00:43:37.994758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994766 | orchestrator | Thursday 05 February 2026 00:43:37 +0000 (0:00:00.192) 0:00:06.346 *****
2026-02-05 00:43:37.994773 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994781 | orchestrator |
2026-02-05 00:43:37.994789 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994797 | orchestrator | Thursday 05 February 2026 00:43:37 +0000 (0:00:00.180) 0:00:06.527 *****
2026-02-05 00:43:37.994805 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994812 | orchestrator |
2026-02-05 00:43:37.994820 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994828 | orchestrator | Thursday 05 February 2026 00:43:37 +0000 (0:00:00.225) 0:00:06.752 *****
2026-02-05 00:43:37.994836 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994843 | orchestrator |
2026-02-05 00:43:37.994851 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:37.994859 | orchestrator | Thursday 05 February 2026 00:43:37 +0000 (0:00:00.194) 0:00:06.946 *****
2026-02-05 00:43:37.994867 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:37.994875 | orchestrator |
2026-02-05 00:43:37.994888 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:45.923592 | orchestrator | Thursday 05 February 2026 00:43:37 +0000 (0:00:00.226) 0:00:07.173 *****
2026-02-05 00:43:45.923703 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.923721 | orchestrator |
2026-02-05 00:43:45.923734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:45.923746 | orchestrator | Thursday 05 February 2026 00:43:38 +0000 (0:00:00.183) 0:00:07.356 *****
2026-02-05 00:43:45.923758 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-05 00:43:45.923770 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-05 00:43:45.923781 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-05 00:43:45.923792 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-05 00:43:45.923803 | orchestrator |
2026-02-05 00:43:45.923815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:45.923826 | orchestrator | Thursday 05 February 2026 00:43:39 +0000 (0:00:01.165) 0:00:08.521 *****
2026-02-05 00:43:45.923837 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.923849 | orchestrator |
2026-02-05 00:43:45.923860 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:45.923872 | orchestrator | Thursday 05 February 2026 00:43:39 +0000 (0:00:00.190) 0:00:08.712 *****
2026-02-05 00:43:45.923883 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.923894 | orchestrator |
2026-02-05 00:43:45.923905 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:45.923916 | orchestrator | Thursday 05 February 2026 00:43:39 +0000 (0:00:00.196) 0:00:08.908 *****
2026-02-05 00:43:45.923927 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.923938 | orchestrator |
2026-02-05 00:43:45.923950 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:43:45.923961 | orchestrator | Thursday 05 February 2026 00:43:39 +0000 (0:00:00.189) 0:00:09.098 *****
2026-02-05 00:43:45.923972 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.923983 | orchestrator |
2026-02-05 00:43:45.923994 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-05 00:43:45.924005 | orchestrator | Thursday 05 February 2026 00:43:40 +0000 (0:00:00.176) 0:00:09.274 *****
2026-02-05 00:43:45.924016 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924051 | orchestrator |
2026-02-05 00:43:45.924065 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-05 00:43:45.924078 | orchestrator | Thursday 05 February 2026 00:43:40 +0000 (0:00:00.127) 0:00:09.401 *****
2026-02-05 00:43:45.924092 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}})
2026-02-05 00:43:45.924106 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b54f13f-3e23-5303-9525-7c2d84d571dd'}})
2026-02-05 00:43:45.924118 | orchestrator |
2026-02-05 00:43:45.924132 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-05 00:43:45.924145 | orchestrator | Thursday 05 February 2026 00:43:40 +0000 (0:00:00.173) 0:00:09.575 *****
2026-02-05 00:43:45.924160 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924174 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924187 | orchestrator |
2026-02-05 00:43:45.924201 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-05 00:43:45.924214 | orchestrator | Thursday 05 February 2026 00:43:42 +0000 (0:00:01.971) 0:00:11.546 *****
2026-02-05 00:43:45.924228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924255 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924268 | orchestrator |
2026-02-05 00:43:45.924282 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-05 00:43:45.924296 | orchestrator | Thursday 05 February 2026 00:43:42 +0000 (0:00:00.145) 0:00:11.692 *****
2026-02-05 00:43:45.924309 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924322 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924334 | orchestrator |
2026-02-05 00:43:45.924364 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-05 00:43:45.924377 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:01.510) 0:00:13.203 *****
2026-02-05 00:43:45.924391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924405 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924416 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924427 | orchestrator |
2026-02-05 00:43:45.924438 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-05 00:43:45.924450 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.153) 0:00:13.357 *****
2026-02-05 00:43:45.924502 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924514 | orchestrator |
2026-02-05 00:43:45.924526 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-05 00:43:45.924537 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.134) 0:00:13.491 *****
2026-02-05 00:43:45.924548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924559 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924579 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924590 | orchestrator |
2026-02-05 00:43:45.924602 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-05 00:43:45.924613 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.289) 0:00:13.781 *****
2026-02-05 00:43:45.924623 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924634 | orchestrator |
2026-02-05 00:43:45.924645 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-05 00:43:45.924656 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.112) 0:00:13.894 *****
2026-02-05 00:43:45.924667 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924678 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924689 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924700 | orchestrator |
2026-02-05 00:43:45.924711 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-05 00:43:45.924722 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.148) 0:00:14.042 *****
2026-02-05 00:43:45.924733 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924744 | orchestrator |
2026-02-05 00:43:45.924754 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-05 00:43:45.924765 | orchestrator | Thursday 05 February 2026 00:43:44 +0000 (0:00:00.133) 0:00:14.176 *****
2026-02-05 00:43:45.924777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924805 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924816 | orchestrator |
2026-02-05 00:43:45.924827 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-05 00:43:45.924838 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:00.146) 0:00:14.323 *****
2026-02-05 00:43:45.924849 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:43:45.924860 | orchestrator |
2026-02-05 00:43:45.924871 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-05 00:43:45.924882 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:00.151) 0:00:14.475 *****
2026-02-05 00:43:45.924893 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924916 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924927 | orchestrator |
2026-02-05 00:43:45.924938 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-05 00:43:45.924949 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:00.165) 0:00:14.640 *****
2026-02-05 00:43:45.924960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.924971 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.924983 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.924994 | orchestrator |
2026-02-05 00:43:45.925004 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-05 00:43:45.925022 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:00.145) 0:00:14.786 *****
2026-02-05 00:43:45.925033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})
2026-02-05 00:43:45.925044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})
2026-02-05 00:43:45.925056 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.925067 | orchestrator |
2026-02-05 00:43:45.925077 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-05 00:43:45.925088 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:00.162) 0:00:14.948 *****
2026-02-05 00:43:45.925100 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:45.925111 | orchestrator |
2026-02-05 00:43:45.925122 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-05 00:43:45.925140 | orchestrator | Thursday 05 February 2026 00:43:45 +0000 (0:00:00.153) 0:00:15.102 *****
2026-02-05 00:43:52.279339 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.279617 | orchestrator |
2026-02-05 00:43:52.279660 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-05 00:43:52.279684 | orchestrator | Thursday 05 February 2026 00:43:46 +0000 (0:00:00.152) 0:00:15.255 *****
2026-02-05 00:43:52.279705 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.279724 | orchestrator |
2026-02-05 00:43:52.279743 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-05 00:43:52.279756 | orchestrator | Thursday 05 February 2026 00:43:46 +0000 (0:00:00.155) 0:00:15.410 *****
2026-02-05 00:43:52.279767 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:43:52.279779 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-05 00:43:52.279790 | orchestrator | }
2026-02-05 00:43:52.279801 | orchestrator |
2026-02-05 00:43:52.279813 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-05 00:43:52.279824 | orchestrator | Thursday 05 February 2026 00:43:46 +0000 (0:00:00.370) 0:00:15.780 *****
2026-02-05 00:43:52.279834 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:43:52.279847 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-05 00:43:52.279858 | orchestrator | }
2026-02-05 00:43:52.279869 | orchestrator |
2026-02-05 00:43:52.279879 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-05 00:43:52.279890 | orchestrator | Thursday 05 February 2026 00:43:46 +0000 (0:00:00.165) 0:00:15.946 *****
2026-02-05 00:43:52.279901 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:43:52.279913 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-05 00:43:52.279924 | orchestrator | }
2026-02-05 00:43:52.279935 | orchestrator |
2026-02-05 00:43:52.279946 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-05 00:43:52.279957 | orchestrator | Thursday 05 February 2026 00:43:46 +0000 (0:00:00.189) 0:00:16.135 *****
2026-02-05 00:43:52.279968 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:43:52.279979 | orchestrator |
2026-02-05 00:43:52.279990 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-05 00:43:52.280001 | orchestrator | Thursday 05 February 2026 00:43:47 +0000 (0:00:00.961) 0:00:17.096 *****
2026-02-05 00:43:52.280012 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:43:52.280023 | orchestrator |
2026-02-05 00:43:52.280034 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-05 00:43:52.280045 | orchestrator | Thursday 05 February 2026 00:43:48 +0000 (0:00:00.556) 0:00:17.652 *****
2026-02-05 00:43:52.280056 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:43:52.280066 | orchestrator |
2026-02-05 00:43:52.280077 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-05 00:43:52.280088 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.539) 0:00:18.192 *****
2026-02-05 00:43:52.280099 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:43:52.280110 | orchestrator |
2026-02-05 00:43:52.280153 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-05 00:43:52.280165 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.144) 0:00:18.336 *****
2026-02-05 00:43:52.280176 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280187 | orchestrator |
2026-02-05 00:43:52.280198 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-05 00:43:52.280209 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.106) 0:00:18.443 *****
2026-02-05 00:43:52.280220 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280231 | orchestrator |
2026-02-05 00:43:52.280242 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-05 00:43:52.280253 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.098) 0:00:18.541 *****
2026-02-05 00:43:52.280264 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 00:43:52.280275 | orchestrator |     "vgs_report": {
2026-02-05 00:43:52.280287 | orchestrator |         "vg": []
2026-02-05 00:43:52.280298 | orchestrator |     }
2026-02-05 00:43:52.280310 | orchestrator | }
2026-02-05 00:43:52.280321 | orchestrator |
2026-02-05 00:43:52.280332 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-05 00:43:52.280343 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.121) 0:00:18.662 *****
2026-02-05 00:43:52.280353 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280364 | orchestrator |
2026-02-05 00:43:52.280375 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-05 00:43:52.280386 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.117) 0:00:18.779 *****
2026-02-05 00:43:52.280397 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280408 | orchestrator |
2026-02-05 00:43:52.280419 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-05 00:43:52.280430 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.120) 0:00:18.900 *****
2026-02-05 00:43:52.280443 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280492 | orchestrator |
2026-02-05 00:43:52.280511 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-05 00:43:52.280530 | orchestrator | Thursday 05 February 2026 00:43:49 +0000 (0:00:00.248) 0:00:19.149 *****
2026-02-05 00:43:52.280549 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280569 | orchestrator |
2026-02-05 00:43:52.280588 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-05 00:43:52.280606 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.129) 0:00:19.279 *****
2026-02-05 00:43:52.280623 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280634 | orchestrator |
2026-02-05 00:43:52.280645 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-05 00:43:52.280656 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.126) 0:00:19.405 *****
2026-02-05 00:43:52.280667 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280677 | orchestrator |
2026-02-05 00:43:52.280688 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-05 00:43:52.280699 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.126) 0:00:19.532 *****
2026-02-05 00:43:52.280710 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280720 | orchestrator |
2026-02-05 00:43:52.280731 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-05 00:43:52.280742 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.130) 0:00:19.662 *****
2026-02-05 00:43:52.280778 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280790 | orchestrator |
2026-02-05 00:43:52.280809 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-05 00:43:52.280827 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.125) 0:00:19.788 *****
2026-02-05 00:43:52.280844 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280863 | orchestrator |
2026-02-05 00:43:52.280881 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-05 00:43:52.280915 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.125) 0:00:19.913 *****
2026-02-05 00:43:52.280932 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:43:52.280942 | orchestrator |
2026-02-05 00:43:52.280953
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-05 00:43:52.280964 | orchestrator | Thursday 05 February 2026 00:43:50 +0000 (0:00:00.136) 0:00:20.049 ***** 2026-02-05 00:43:52.280975 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.280986 | orchestrator | 2026-02-05 00:43:52.281016 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-05 00:43:52.281028 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.147) 0:00:20.197 ***** 2026-02-05 00:43:52.281039 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281050 | orchestrator | 2026-02-05 00:43:52.281061 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-05 00:43:52.281072 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.124) 0:00:20.322 ***** 2026-02-05 00:43:52.281083 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281093 | orchestrator | 2026-02-05 00:43:52.281104 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-05 00:43:52.281115 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.133) 0:00:20.455 ***** 2026-02-05 00:43:52.281126 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281136 | orchestrator | 2026-02-05 00:43:52.281147 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-05 00:43:52.281158 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.122) 0:00:20.577 ***** 2026-02-05 00:43:52.281171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:52.281183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 
'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:52.281194 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281205 | orchestrator | 2026-02-05 00:43:52.281216 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-05 00:43:52.281232 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.281) 0:00:20.859 ***** 2026-02-05 00:43:52.281244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:52.281255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:52.281266 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281277 | orchestrator | 2026-02-05 00:43:52.281287 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-05 00:43:52.281298 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.142) 0:00:21.002 ***** 2026-02-05 00:43:52.281309 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:52.281320 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:52.281331 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281347 | orchestrator | 2026-02-05 00:43:52.281365 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-05 00:43:52.281384 | orchestrator | Thursday 05 February 2026 00:43:51 +0000 (0:00:00.148) 0:00:21.151 ***** 2026-02-05 00:43:52.281403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:52.281422 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:52.281482 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281503 | orchestrator | 2026-02-05 00:43:52.281521 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 00:43:52.281538 | orchestrator | Thursday 05 February 2026 00:43:52 +0000 (0:00:00.133) 0:00:21.284 ***** 2026-02-05 00:43:52.281555 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:52.281571 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:52.281589 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:52.281606 | orchestrator | 2026-02-05 00:43:52.281624 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 00:43:52.281643 | orchestrator | Thursday 05 February 2026 00:43:52 +0000 (0:00:00.133) 0:00:21.417 ***** 2026-02-05 00:43:52.281675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:56.999601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:56.999709 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:56.999728 | orchestrator | 2026-02-05 00:43:56.999739 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-05 00:43:56.999752 | orchestrator | Thursday 05 February 2026 00:43:52 +0000 (0:00:00.118) 0:00:21.535 ***** 2026-02-05 00:43:56.999763 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:56.999773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:56.999784 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:56.999795 | orchestrator | 2026-02-05 00:43:56.999806 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 00:43:56.999817 | orchestrator | Thursday 05 February 2026 00:43:52 +0000 (0:00:00.170) 0:00:21.705 ***** 2026-02-05 00:43:56.999828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:56.999839 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:56.999849 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:56.999859 | orchestrator | 2026-02-05 00:43:56.999869 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 00:43:56.999880 | orchestrator | Thursday 05 February 2026 00:43:52 +0000 (0:00:00.121) 0:00:21.826 ***** 2026-02-05 00:43:56.999890 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:43:56.999901 | orchestrator | 2026-02-05 00:43:56.999912 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 00:43:56.999923 | orchestrator | Thursday 05 February 2026 00:43:53 +0000 
(0:00:00.523) 0:00:22.350 ***** 2026-02-05 00:43:56.999933 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:43:56.999944 | orchestrator | 2026-02-05 00:43:56.999954 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 00:43:56.999981 | orchestrator | Thursday 05 February 2026 00:43:53 +0000 (0:00:00.511) 0:00:22.861 ***** 2026-02-05 00:43:56.999993 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:43:57.000004 | orchestrator | 2026-02-05 00:43:57.000015 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 00:43:57.000026 | orchestrator | Thursday 05 February 2026 00:43:53 +0000 (0:00:00.132) 0:00:22.994 ***** 2026-02-05 00:43:57.000066 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'vg_name': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'}) 2026-02-05 00:43:57.000079 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'vg_name': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}) 2026-02-05 00:43:57.000089 | orchestrator | 2026-02-05 00:43:57.000099 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 00:43:57.000110 | orchestrator | Thursday 05 February 2026 00:43:53 +0000 (0:00:00.176) 0:00:23.171 ***** 2026-02-05 00:43:57.000121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:57.000131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:57.000141 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:57.000152 | orchestrator | 2026-02-05 00:43:57.000163 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-05 00:43:57.000174 | orchestrator | Thursday 05 February 2026 00:43:54 +0000 (0:00:00.298) 0:00:23.469 ***** 2026-02-05 00:43:57.000185 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:57.000196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:57.000206 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:57.000216 | orchestrator | 2026-02-05 00:43:57.000227 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 00:43:57.000238 | orchestrator | Thursday 05 February 2026 00:43:54 +0000 (0:00:00.146) 0:00:23.616 ***** 2026-02-05 00:43:57.000249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'})  2026-02-05 00:43:57.000261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'})  2026-02-05 00:43:57.000271 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:43:57.000282 | orchestrator | 2026-02-05 00:43:57.000292 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 00:43:57.000303 | orchestrator | Thursday 05 February 2026 00:43:54 +0000 (0:00:00.146) 0:00:23.763 ***** 2026-02-05 00:43:57.000332 | orchestrator | ok: [testbed-node-3] => { 2026-02-05 00:43:57.000344 | orchestrator |  "lvm_report": { 2026-02-05 00:43:57.000356 | orchestrator |  "lv": [ 2026-02-05 00:43:57.000367 | orchestrator |  { 2026-02-05 00:43:57.000377 | orchestrator |  "lv_name": 
"osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd", 2026-02-05 00:43:57.000389 | orchestrator |  "vg_name": "ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd" 2026-02-05 00:43:57.000400 | orchestrator |  }, 2026-02-05 00:43:57.000410 | orchestrator |  { 2026-02-05 00:43:57.000421 | orchestrator |  "lv_name": "osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f", 2026-02-05 00:43:57.000433 | orchestrator |  "vg_name": "ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f" 2026-02-05 00:43:57.000443 | orchestrator |  } 2026-02-05 00:43:57.000495 | orchestrator |  ], 2026-02-05 00:43:57.000503 | orchestrator |  "pv": [ 2026-02-05 00:43:57.000510 | orchestrator |  { 2026-02-05 00:43:57.000517 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 00:43:57.000525 | orchestrator |  "vg_name": "ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f" 2026-02-05 00:43:57.000533 | orchestrator |  }, 2026-02-05 00:43:57.000540 | orchestrator |  { 2026-02-05 00:43:57.000554 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 00:43:57.000561 | orchestrator |  "vg_name": "ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd" 2026-02-05 00:43:57.000567 | orchestrator |  } 2026-02-05 00:43:57.000573 | orchestrator |  ] 2026-02-05 00:43:57.000579 | orchestrator |  } 2026-02-05 00:43:57.000586 | orchestrator | } 2026-02-05 00:43:57.000592 | orchestrator | 2026-02-05 00:43:57.000598 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-05 00:43:57.000604 | orchestrator | 2026-02-05 00:43:57.000611 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 00:43:57.000617 | orchestrator | Thursday 05 February 2026 00:43:54 +0000 (0:00:00.263) 0:00:24.026 ***** 2026-02-05 00:43:57.000623 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-05 00:43:57.000629 | orchestrator | 2026-02-05 00:43:57.000636 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 
00:43:57.000642 | orchestrator | Thursday 05 February 2026 00:43:55 +0000 (0:00:00.237) 0:00:24.264 ***** 2026-02-05 00:43:57.000648 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:43:57.000654 | orchestrator | 2026-02-05 00:43:57.000660 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:57.000666 | orchestrator | Thursday 05 February 2026 00:43:55 +0000 (0:00:00.212) 0:00:24.476 ***** 2026-02-05 00:43:57.000673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-05 00:43:57.000680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:43:57.000686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:43:57.000693 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:43:57.000699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:43:57.000705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:43:57.000711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:43:57.000717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:43:57.000724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-05 00:43:57.000821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:43:57.000829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:43:57.000835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:43:57.000842 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:43:57.000848 | orchestrator | 2026-02-05 00:43:57.000854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:57.000860 | orchestrator | Thursday 05 February 2026 00:43:55 +0000 (0:00:00.365) 0:00:24.841 ***** 2026-02-05 00:43:57.000867 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:57.000873 | orchestrator | 2026-02-05 00:43:57.000879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:57.000894 | orchestrator | Thursday 05 February 2026 00:43:55 +0000 (0:00:00.189) 0:00:25.031 ***** 2026-02-05 00:43:57.000900 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:57.000907 | orchestrator | 2026-02-05 00:43:57.000913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:57.000919 | orchestrator | Thursday 05 February 2026 00:43:56 +0000 (0:00:00.170) 0:00:25.201 ***** 2026-02-05 00:43:57.000925 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:57.000936 | orchestrator | 2026-02-05 00:43:57.000946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:57.000964 | orchestrator | Thursday 05 February 2026 00:43:56 +0000 (0:00:00.472) 0:00:25.674 ***** 2026-02-05 00:43:57.000974 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:57.000984 | orchestrator | 2026-02-05 00:43:57.000994 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:43:57.001004 | orchestrator | Thursday 05 February 2026 00:43:56 +0000 (0:00:00.159) 0:00:25.834 ***** 2026-02-05 00:43:57.001016 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:57.001027 | orchestrator | 2026-02-05 00:43:57.001037 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-05 00:43:57.001043 | orchestrator | Thursday 05 February 2026 00:43:56 +0000 (0:00:00.184) 0:00:26.018 ***** 2026-02-05 00:43:57.001050 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:43:57.001056 | orchestrator | 2026-02-05 00:43:57.001071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278111 | orchestrator | Thursday 05 February 2026 00:43:56 +0000 (0:00:00.162) 0:00:26.181 ***** 2026-02-05 00:44:07.278197 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278210 | orchestrator | 2026-02-05 00:44:07.278217 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278225 | orchestrator | Thursday 05 February 2026 00:43:57 +0000 (0:00:00.163) 0:00:26.345 ***** 2026-02-05 00:44:07.278231 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278238 | orchestrator | 2026-02-05 00:44:07.278245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278251 | orchestrator | Thursday 05 February 2026 00:43:57 +0000 (0:00:00.159) 0:00:26.505 ***** 2026-02-05 00:44:07.278258 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6) 2026-02-05 00:44:07.278265 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6) 2026-02-05 00:44:07.278271 | orchestrator | 2026-02-05 00:44:07.278278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278284 | orchestrator | Thursday 05 February 2026 00:43:57 +0000 (0:00:00.402) 0:00:26.907 ***** 2026-02-05 00:44:07.278290 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85) 2026-02-05 00:44:07.278297 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85) 2026-02-05 00:44:07.278303 | orchestrator | 2026-02-05 00:44:07.278309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278316 | orchestrator | Thursday 05 February 2026 00:43:58 +0000 (0:00:00.372) 0:00:27.280 ***** 2026-02-05 00:44:07.278322 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a) 2026-02-05 00:44:07.278328 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a) 2026-02-05 00:44:07.278334 | orchestrator | 2026-02-05 00:44:07.278340 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278347 | orchestrator | Thursday 05 February 2026 00:43:58 +0000 (0:00:00.419) 0:00:27.700 ***** 2026-02-05 00:44:07.278366 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c) 2026-02-05 00:44:07.278375 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c) 2026-02-05 00:44:07.278382 | orchestrator | 2026-02-05 00:44:07.278389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:07.278393 | orchestrator | Thursday 05 February 2026 00:43:59 +0000 (0:00:00.553) 0:00:28.253 ***** 2026-02-05 00:44:07.278397 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-05 00:44:07.278401 | orchestrator | 2026-02-05 00:44:07.278405 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278409 | orchestrator | Thursday 05 February 2026 00:43:59 +0000 (0:00:00.575) 0:00:28.828 ***** 2026-02-05 00:44:07.278430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-05 00:44:07.278435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-05 00:44:07.278439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-05 00:44:07.278442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-05 00:44:07.278484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-05 00:44:07.278489 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-05 00:44:07.278493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-05 00:44:07.278496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-05 00:44:07.278500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-05 00:44:07.278504 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-05 00:44:07.278508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-05 00:44:07.278512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-05 00:44:07.278515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-05 00:44:07.278520 | orchestrator | 2026-02-05 00:44:07.278523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278527 | orchestrator | Thursday 05 February 2026 00:44:00 +0000 (0:00:00.751) 0:00:29.580 ***** 2026-02-05 00:44:07.278531 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278535 | orchestrator | 2026-02-05 
00:44:07.278538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278542 | orchestrator | Thursday 05 February 2026 00:44:00 +0000 (0:00:00.175) 0:00:29.756 ***** 2026-02-05 00:44:07.278546 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278550 | orchestrator | 2026-02-05 00:44:07.278553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278557 | orchestrator | Thursday 05 February 2026 00:44:00 +0000 (0:00:00.184) 0:00:29.940 ***** 2026-02-05 00:44:07.278561 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278565 | orchestrator | 2026-02-05 00:44:07.278580 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278584 | orchestrator | Thursday 05 February 2026 00:44:00 +0000 (0:00:00.172) 0:00:30.113 ***** 2026-02-05 00:44:07.278588 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278592 | orchestrator | 2026-02-05 00:44:07.278596 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278600 | orchestrator | Thursday 05 February 2026 00:44:01 +0000 (0:00:00.188) 0:00:30.302 ***** 2026-02-05 00:44:07.278605 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278609 | orchestrator | 2026-02-05 00:44:07.278613 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278618 | orchestrator | Thursday 05 February 2026 00:44:01 +0000 (0:00:00.185) 0:00:30.487 ***** 2026-02-05 00:44:07.278622 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278627 | orchestrator | 2026-02-05 00:44:07.278631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278635 | orchestrator | Thursday 05 February 2026 00:44:01 +0000 (0:00:00.181) 
0:00:30.669 ***** 2026-02-05 00:44:07.278640 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278644 | orchestrator | 2026-02-05 00:44:07.278648 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278653 | orchestrator | Thursday 05 February 2026 00:44:01 +0000 (0:00:00.190) 0:00:30.859 ***** 2026-02-05 00:44:07.278662 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278667 | orchestrator | 2026-02-05 00:44:07.278671 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278677 | orchestrator | Thursday 05 February 2026 00:44:01 +0000 (0:00:00.196) 0:00:31.055 ***** 2026-02-05 00:44:07.278683 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-05 00:44:07.278689 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-05 00:44:07.278695 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-05 00:44:07.278703 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-05 00:44:07.278712 | orchestrator | 2026-02-05 00:44:07.278721 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278727 | orchestrator | Thursday 05 February 2026 00:44:02 +0000 (0:00:00.739) 0:00:31.795 ***** 2026-02-05 00:44:07.278733 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278740 | orchestrator | 2026-02-05 00:44:07.278746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278752 | orchestrator | Thursday 05 February 2026 00:44:02 +0000 (0:00:00.182) 0:00:31.978 ***** 2026-02-05 00:44:07.278763 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278769 | orchestrator | 2026-02-05 00:44:07.278776 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278781 | orchestrator | Thursday 05 
February 2026 00:44:03 +0000 (0:00:00.456) 0:00:32.434 ***** 2026-02-05 00:44:07.278787 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278792 | orchestrator | 2026-02-05 00:44:07.278798 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-05 00:44:07.278803 | orchestrator | Thursday 05 February 2026 00:44:03 +0000 (0:00:00.181) 0:00:32.616 ***** 2026-02-05 00:44:07.278810 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278816 | orchestrator | 2026-02-05 00:44:07.278822 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-05 00:44:07.278828 | orchestrator | Thursday 05 February 2026 00:44:03 +0000 (0:00:00.191) 0:00:32.807 ***** 2026-02-05 00:44:07.278834 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278840 | orchestrator | 2026-02-05 00:44:07.278845 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-05 00:44:07.278851 | orchestrator | Thursday 05 February 2026 00:44:03 +0000 (0:00:00.124) 0:00:32.931 ***** 2026-02-05 00:44:07.278857 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}}) 2026-02-05 00:44:07.278863 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a29ad6cb-22eb-5988-a460-3c83981a9937'}}) 2026-02-05 00:44:07.278869 | orchestrator | 2026-02-05 00:44:07.278875 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-05 00:44:07.278881 | orchestrator | Thursday 05 February 2026 00:44:03 +0000 (0:00:00.177) 0:00:33.109 ***** 2026-02-05 00:44:07.278889 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}) 2026-02-05 00:44:07.278896 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'}) 2026-02-05 00:44:07.278903 | orchestrator | 2026-02-05 00:44:07.278909 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-05 00:44:07.278915 | orchestrator | Thursday 05 February 2026 00:44:05 +0000 (0:00:01.851) 0:00:34.960 ***** 2026-02-05 00:44:07.278921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:07.278928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:07.278940 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:07.278947 | orchestrator | 2026-02-05 00:44:07.278953 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-05 00:44:07.278960 | orchestrator | Thursday 05 February 2026 00:44:05 +0000 (0:00:00.143) 0:00:35.104 ***** 2026-02-05 00:44:07.278964 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}) 2026-02-05 00:44:07.278974 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'}) 2026-02-05 00:44:12.610250 | orchestrator | 2026-02-05 00:44:12.610322 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-05 00:44:12.610330 | orchestrator | Thursday 05 February 2026 00:44:07 +0000 (0:00:01.429) 0:00:36.534 ***** 2026-02-05 00:44:12.610335 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 
'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:12.610342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610347 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610352 | orchestrator | 2026-02-05 00:44:12.610357 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-05 00:44:12.610362 | orchestrator | Thursday 05 February 2026 00:44:07 +0000 (0:00:00.138) 0:00:36.672 ***** 2026-02-05 00:44:12.610366 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610370 | orchestrator | 2026-02-05 00:44:12.610375 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-05 00:44:12.610379 | orchestrator | Thursday 05 February 2026 00:44:07 +0000 (0:00:00.128) 0:00:36.801 ***** 2026-02-05 00:44:12.610384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:12.610388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610392 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610397 | orchestrator | 2026-02-05 00:44:12.610401 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-05 00:44:12.610406 | orchestrator | Thursday 05 February 2026 00:44:07 +0000 (0:00:00.140) 0:00:36.942 ***** 2026-02-05 00:44:12.610410 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610414 | orchestrator | 2026-02-05 00:44:12.610419 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-05 00:44:12.610423 | orchestrator | 
Thursday 05 February 2026 00:44:07 +0000 (0:00:00.129) 0:00:37.071 ***** 2026-02-05 00:44:12.610428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:12.610432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610437 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610441 | orchestrator | 2026-02-05 00:44:12.610479 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-05 00:44:12.610484 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.256) 0:00:37.328 ***** 2026-02-05 00:44:12.610488 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610493 | orchestrator | 2026-02-05 00:44:12.610497 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-05 00:44:12.610502 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.126) 0:00:37.454 ***** 2026-02-05 00:44:12.610506 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:12.610528 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610533 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610538 | orchestrator | 2026-02-05 00:44:12.610543 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-05 00:44:12.610559 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.139) 0:00:37.594 ***** 2026-02-05 00:44:12.610564 | orchestrator | ok: [testbed-node-4] 
2026-02-05 00:44:12.610569 | orchestrator | 2026-02-05 00:44:12.610573 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-05 00:44:12.610578 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.132) 0:00:37.727 ***** 2026-02-05 00:44:12.610582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:12.610587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610591 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610595 | orchestrator | 2026-02-05 00:44:12.610600 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-05 00:44:12.610604 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.136) 0:00:37.864 ***** 2026-02-05 00:44:12.610608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:12.610613 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610617 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610622 | orchestrator | 2026-02-05 00:44:12.610626 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-05 00:44:12.610642 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.138) 0:00:38.002 ***** 2026-02-05 00:44:12.610646 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 
00:44:12.610651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:12.610655 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610660 | orchestrator | 2026-02-05 00:44:12.610664 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-05 00:44:12.610668 | orchestrator | Thursday 05 February 2026 00:44:08 +0000 (0:00:00.140) 0:00:38.142 ***** 2026-02-05 00:44:12.610673 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610677 | orchestrator | 2026-02-05 00:44:12.610682 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-05 00:44:12.610686 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:00.129) 0:00:38.272 ***** 2026-02-05 00:44:12.610691 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610695 | orchestrator | 2026-02-05 00:44:12.610699 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-05 00:44:12.610704 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:00.127) 0:00:38.400 ***** 2026-02-05 00:44:12.610708 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610713 | orchestrator | 2026-02-05 00:44:12.610717 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-05 00:44:12.610721 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:00.131) 0:00:38.531 ***** 2026-02-05 00:44:12.610726 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:44:12.610731 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-05 00:44:12.610739 | orchestrator | } 2026-02-05 00:44:12.610744 | orchestrator | 2026-02-05 00:44:12.610748 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-05 
00:44:12.610753 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:00.127) 0:00:38.658 ***** 2026-02-05 00:44:12.610757 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:44:12.610761 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-05 00:44:12.610766 | orchestrator | } 2026-02-05 00:44:12.610770 | orchestrator | 2026-02-05 00:44:12.610777 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-05 00:44:12.610782 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:00.119) 0:00:38.778 ***** 2026-02-05 00:44:12.610786 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:44:12.610791 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-05 00:44:12.610795 | orchestrator | } 2026-02-05 00:44:12.610799 | orchestrator | 2026-02-05 00:44:12.610804 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-05 00:44:12.610809 | orchestrator | Thursday 05 February 2026 00:44:09 +0000 (0:00:00.255) 0:00:39.033 ***** 2026-02-05 00:44:12.610813 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:12.610817 | orchestrator | 2026-02-05 00:44:12.610823 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-05 00:44:12.610828 | orchestrator | Thursday 05 February 2026 00:44:10 +0000 (0:00:00.576) 0:00:39.610 ***** 2026-02-05 00:44:12.610833 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:12.610838 | orchestrator | 2026-02-05 00:44:12.610843 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-05 00:44:12.610848 | orchestrator | Thursday 05 February 2026 00:44:10 +0000 (0:00:00.515) 0:00:40.126 ***** 2026-02-05 00:44:12.610853 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:12.610858 | orchestrator | 2026-02-05 00:44:12.610863 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-05 00:44:12.610869 | orchestrator | Thursday 05 February 2026 00:44:11 +0000 (0:00:00.515) 0:00:40.641 ***** 2026-02-05 00:44:12.610874 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:12.610879 | orchestrator | 2026-02-05 00:44:12.610884 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-05 00:44:12.610889 | orchestrator | Thursday 05 February 2026 00:44:11 +0000 (0:00:00.142) 0:00:40.784 ***** 2026-02-05 00:44:12.610894 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610899 | orchestrator | 2026-02-05 00:44:12.610904 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-05 00:44:12.610909 | orchestrator | Thursday 05 February 2026 00:44:11 +0000 (0:00:00.126) 0:00:40.910 ***** 2026-02-05 00:44:12.610914 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610919 | orchestrator | 2026-02-05 00:44:12.610924 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-05 00:44:12.610929 | orchestrator | Thursday 05 February 2026 00:44:11 +0000 (0:00:00.127) 0:00:41.038 ***** 2026-02-05 00:44:12.610934 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:44:12.610939 | orchestrator |  "vgs_report": { 2026-02-05 00:44:12.610945 | orchestrator |  "vg": [] 2026-02-05 00:44:12.610951 | orchestrator |  } 2026-02-05 00:44:12.610956 | orchestrator | } 2026-02-05 00:44:12.610961 | orchestrator | 2026-02-05 00:44:12.610966 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-05 00:44:12.610971 | orchestrator | Thursday 05 February 2026 00:44:12 +0000 (0:00:00.147) 0:00:41.186 ***** 2026-02-05 00:44:12.610976 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.610981 | orchestrator | 2026-02-05 00:44:12.610987 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-05 00:44:12.610991 | orchestrator | Thursday 05 February 2026 00:44:12 +0000 (0:00:00.138) 0:00:41.325 ***** 2026-02-05 00:44:12.610996 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.611001 | orchestrator | 2026-02-05 00:44:12.611006 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-05 00:44:12.611015 | orchestrator | Thursday 05 February 2026 00:44:12 +0000 (0:00:00.154) 0:00:41.480 ***** 2026-02-05 00:44:12.611020 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.611025 | orchestrator | 2026-02-05 00:44:12.611031 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-05 00:44:12.611036 | orchestrator | Thursday 05 February 2026 00:44:12 +0000 (0:00:00.167) 0:00:41.648 ***** 2026-02-05 00:44:12.611040 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:12.611045 | orchestrator | 2026-02-05 00:44:12.611052 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-05 00:44:16.958845 | orchestrator | Thursday 05 February 2026 00:44:12 +0000 (0:00:00.139) 0:00:41.788 ***** 2026-02-05 00:44:16.958989 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959015 | orchestrator | 2026-02-05 00:44:16.959034 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-05 00:44:16.959051 | orchestrator | Thursday 05 February 2026 00:44:12 +0000 (0:00:00.351) 0:00:42.139 ***** 2026-02-05 00:44:16.959068 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959084 | orchestrator | 2026-02-05 00:44:16.959100 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-05 00:44:16.959117 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.154) 0:00:42.294 ***** 2026-02-05 00:44:16.959134 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 00:44:16.959152 | orchestrator | 2026-02-05 00:44:16.959168 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-05 00:44:16.959186 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.116) 0:00:42.410 ***** 2026-02-05 00:44:16.959203 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959214 | orchestrator | 2026-02-05 00:44:16.959222 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-05 00:44:16.959230 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.117) 0:00:42.528 ***** 2026-02-05 00:44:16.959238 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959249 | orchestrator | 2026-02-05 00:44:16.959262 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-05 00:44:16.959273 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.128) 0:00:42.657 ***** 2026-02-05 00:44:16.959282 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959295 | orchestrator | 2026-02-05 00:44:16.959307 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-05 00:44:16.959321 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.130) 0:00:42.787 ***** 2026-02-05 00:44:16.959334 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959348 | orchestrator | 2026-02-05 00:44:16.959362 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-05 00:44:16.959376 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.113) 0:00:42.901 ***** 2026-02-05 00:44:16.959411 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959426 | orchestrator | 2026-02-05 00:44:16.959439 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-05 00:44:16.959475 | orchestrator | 
Thursday 05 February 2026 00:44:13 +0000 (0:00:00.137) 0:00:43.038 ***** 2026-02-05 00:44:16.959489 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959504 | orchestrator | 2026-02-05 00:44:16.959518 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-05 00:44:16.959532 | orchestrator | Thursday 05 February 2026 00:44:13 +0000 (0:00:00.131) 0:00:43.170 ***** 2026-02-05 00:44:16.959546 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959561 | orchestrator | 2026-02-05 00:44:16.959574 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-05 00:44:16.959588 | orchestrator | Thursday 05 February 2026 00:44:14 +0000 (0:00:00.117) 0:00:43.288 ***** 2026-02-05 00:44:16.959601 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.959630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.959640 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959649 | orchestrator | 2026-02-05 00:44:16.959659 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-05 00:44:16.959667 | orchestrator | Thursday 05 February 2026 00:44:14 +0000 (0:00:00.139) 0:00:43.428 ***** 2026-02-05 00:44:16.959677 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.959687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.959701 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 00:44:16.959715 | orchestrator | 2026-02-05 00:44:16.959729 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-05 00:44:16.959743 | orchestrator | Thursday 05 February 2026 00:44:14 +0000 (0:00:00.134) 0:00:43.562 ***** 2026-02-05 00:44:16.959757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.959770 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.959783 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959796 | orchestrator | 2026-02-05 00:44:16.959810 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-05 00:44:16.959823 | orchestrator | Thursday 05 February 2026 00:44:14 +0000 (0:00:00.277) 0:00:43.839 ***** 2026-02-05 00:44:16.959837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.959850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.959864 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959878 | orchestrator | 2026-02-05 00:44:16.959911 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-05 00:44:16.959925 | orchestrator | Thursday 05 February 2026 00:44:14 +0000 (0:00:00.144) 0:00:43.984 ***** 2026-02-05 00:44:16.959938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 
'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.959952 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.959966 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.959980 | orchestrator | 2026-02-05 00:44:16.959994 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-05 00:44:16.960003 | orchestrator | Thursday 05 February 2026 00:44:14 +0000 (0:00:00.139) 0:00:44.124 ***** 2026-02-05 00:44:16.960011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.960019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.960026 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.960034 | orchestrator | 2026-02-05 00:44:16.960042 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-05 00:44:16.960050 | orchestrator | Thursday 05 February 2026 00:44:15 +0000 (0:00:00.145) 0:00:44.269 ***** 2026-02-05 00:44:16.960058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.960073 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.960081 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.960089 | orchestrator | 2026-02-05 00:44:16.960097 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-05 
00:44:16.960105 | orchestrator | Thursday 05 February 2026 00:44:15 +0000 (0:00:00.159) 0:00:44.428 ***** 2026-02-05 00:44:16.960113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.960121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.960129 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.960137 | orchestrator | 2026-02-05 00:44:16.960145 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-05 00:44:16.960152 | orchestrator | Thursday 05 February 2026 00:44:15 +0000 (0:00:00.124) 0:00:44.553 ***** 2026-02-05 00:44:16.960161 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:16.960169 | orchestrator | 2026-02-05 00:44:16.960176 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-05 00:44:16.960184 | orchestrator | Thursday 05 February 2026 00:44:15 +0000 (0:00:00.521) 0:00:45.074 ***** 2026-02-05 00:44:16.960192 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:16.960200 | orchestrator | 2026-02-05 00:44:16.960208 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-05 00:44:16.960216 | orchestrator | Thursday 05 February 2026 00:44:16 +0000 (0:00:00.554) 0:00:45.628 ***** 2026-02-05 00:44:16.960224 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:44:16.960236 | orchestrator | 2026-02-05 00:44:16.960250 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-05 00:44:16.960263 | orchestrator | Thursday 05 February 2026 00:44:16 +0000 (0:00:00.139) 0:00:45.768 ***** 2026-02-05 00:44:16.960276 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'vg_name': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}) 2026-02-05 00:44:16.960291 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'vg_name': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'}) 2026-02-05 00:44:16.960305 | orchestrator | 2026-02-05 00:44:16.960319 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-05 00:44:16.960331 | orchestrator | Thursday 05 February 2026 00:44:16 +0000 (0:00:00.150) 0:00:45.919 ***** 2026-02-05 00:44:16.960344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.960352 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:16.960360 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:16.960368 | orchestrator | 2026-02-05 00:44:16.960375 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-05 00:44:16.960383 | orchestrator | Thursday 05 February 2026 00:44:16 +0000 (0:00:00.153) 0:00:46.072 ***** 2026-02-05 00:44:16.960391 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:16.960405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:22.333858 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:22.333955 | orchestrator | 2026-02-05 00:44:22.333988 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 00:44:22.334000 | 
orchestrator | Thursday 05 February 2026 00:44:17 +0000 (0:00:00.133) 0:00:46.206 ***** 2026-02-05 00:44:22.334011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'})  2026-02-05 00:44:22.334140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'})  2026-02-05 00:44:22.334152 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:44:22.334161 | orchestrator | 2026-02-05 00:44:22.334172 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 00:44:22.334181 | orchestrator | Thursday 05 February 2026 00:44:17 +0000 (0:00:00.144) 0:00:46.351 ***** 2026-02-05 00:44:22.334191 | orchestrator | ok: [testbed-node-4] => { 2026-02-05 00:44:22.334200 | orchestrator |  "lvm_report": { 2026-02-05 00:44:22.334211 | orchestrator |  "lv": [ 2026-02-05 00:44:22.334221 | orchestrator |  { 2026-02-05 00:44:22.334231 | orchestrator |  "lv_name": "osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c", 2026-02-05 00:44:22.334241 | orchestrator |  "vg_name": "ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c" 2026-02-05 00:44:22.334251 | orchestrator |  }, 2026-02-05 00:44:22.334261 | orchestrator |  { 2026-02-05 00:44:22.334278 | orchestrator |  "lv_name": "osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937", 2026-02-05 00:44:22.334293 | orchestrator |  "vg_name": "ceph-a29ad6cb-22eb-5988-a460-3c83981a9937" 2026-02-05 00:44:22.334309 | orchestrator |  } 2026-02-05 00:44:22.334326 | orchestrator |  ], 2026-02-05 00:44:22.334342 | orchestrator |  "pv": [ 2026-02-05 00:44:22.334357 | orchestrator |  { 2026-02-05 00:44:22.334375 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 00:44:22.334402 | orchestrator |  "vg_name": "ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c" 2026-02-05 00:44:22.334420 | orchestrator |  }, 2026-02-05 
00:44:22.334437 | orchestrator |  { 2026-02-05 00:44:22.334478 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 00:44:22.334497 | orchestrator |  "vg_name": "ceph-a29ad6cb-22eb-5988-a460-3c83981a9937" 2026-02-05 00:44:22.334514 | orchestrator |  } 2026-02-05 00:44:22.334530 | orchestrator |  ] 2026-02-05 00:44:22.334546 | orchestrator |  } 2026-02-05 00:44:22.334558 | orchestrator | } 2026-02-05 00:44:22.334569 | orchestrator | 2026-02-05 00:44:22.334579 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-05 00:44:22.334589 | orchestrator | 2026-02-05 00:44:22.334599 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-05 00:44:22.334609 | orchestrator | Thursday 05 February 2026 00:44:17 +0000 (0:00:00.389) 0:00:46.741 ***** 2026-02-05 00:44:22.334619 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-05 00:44:22.334629 | orchestrator | 2026-02-05 00:44:22.334639 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-05 00:44:22.334648 | orchestrator | Thursday 05 February 2026 00:44:17 +0000 (0:00:00.242) 0:00:46.983 ***** 2026-02-05 00:44:22.334658 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:44:22.334668 | orchestrator | 2026-02-05 00:44:22.334677 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-05 00:44:22.334687 | orchestrator | Thursday 05 February 2026 00:44:18 +0000 (0:00:00.219) 0:00:47.202 ***** 2026-02-05 00:44:22.334696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-05 00:44:22.334706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-05 00:44:22.334720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-05 00:44:22.334736 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-05 00:44:22.334767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-05 00:44:22.334784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-05 00:44:22.334800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-05 00:44:22.334818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-05 00:44:22.334834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-05 00:44:22.334855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-05 00:44:22.334872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-05 00:44:22.334889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-05 00:44:22.334905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-05 00:44:22.334922 | orchestrator |
2026-02-05 00:44:22.334938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.334955 | orchestrator | Thursday 05 February 2026 00:44:18 +0000 (0:00:00.367) 0:00:47.569 *****
2026-02-05 00:44:22.334966 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.334976 | orchestrator |
2026-02-05 00:44:22.334985 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.334994 | orchestrator | Thursday 05 February 2026 00:44:18 +0000 (0:00:00.179) 0:00:47.749 *****
2026-02-05 00:44:22.335004 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335025 | orchestrator |
2026-02-05 00:44:22.335035 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335066 | orchestrator | Thursday 05 February 2026 00:44:18 +0000 (0:00:00.173) 0:00:47.923 *****
2026-02-05 00:44:22.335076 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335086 | orchestrator |
2026-02-05 00:44:22.335095 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335105 | orchestrator | Thursday 05 February 2026 00:44:18 +0000 (0:00:00.181) 0:00:48.105 *****
2026-02-05 00:44:22.335114 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335123 | orchestrator |
2026-02-05 00:44:22.335133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335142 | orchestrator | Thursday 05 February 2026 00:44:19 +0000 (0:00:00.169) 0:00:48.274 *****
2026-02-05 00:44:22.335152 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335161 | orchestrator |
2026-02-05 00:44:22.335171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335180 | orchestrator | Thursday 05 February 2026 00:44:19 +0000 (0:00:00.458) 0:00:48.732 *****
2026-02-05 00:44:22.335190 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335199 | orchestrator |
2026-02-05 00:44:22.335208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335219 | orchestrator | Thursday 05 February 2026 00:44:19 +0000 (0:00:00.198) 0:00:48.931 *****
2026-02-05 00:44:22.335236 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335253 | orchestrator |
2026-02-05 00:44:22.335269 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335286 | orchestrator | Thursday 05 February 2026 00:44:19 +0000 (0:00:00.173) 0:00:49.104 *****
2026-02-05 00:44:22.335303 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:22.335321 | orchestrator |
2026-02-05 00:44:22.335339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335355 | orchestrator | Thursday 05 February 2026 00:44:20 +0000 (0:00:00.182) 0:00:49.286 *****
2026-02-05 00:44:22.335373 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11)
2026-02-05 00:44:22.335398 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11)
2026-02-05 00:44:22.335428 | orchestrator |
2026-02-05 00:44:22.335467 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335483 | orchestrator | Thursday 05 February 2026 00:44:20 +0000 (0:00:00.404) 0:00:49.691 *****
2026-02-05 00:44:22.335500 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f)
2026-02-05 00:44:22.335517 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f)
2026-02-05 00:44:22.335532 | orchestrator |
2026-02-05 00:44:22.335549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335559 | orchestrator | Thursday 05 February 2026 00:44:20 +0000 (0:00:00.388) 0:00:50.080 *****
2026-02-05 00:44:22.335569 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd)
2026-02-05 00:44:22.335578 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd)
2026-02-05 00:44:22.335588 | orchestrator |
2026-02-05 00:44:22.335597 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335607 | orchestrator | Thursday 05 February 2026 00:44:21 +0000 (0:00:00.383) 0:00:50.463 *****
2026-02-05 00:44:22.335616 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7)
2026-02-05 00:44:22.335626 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7)
2026-02-05 00:44:22.335635 | orchestrator |
2026-02-05 00:44:22.335645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-05 00:44:22.335654 | orchestrator | Thursday 05 February 2026 00:44:21 +0000 (0:00:00.382) 0:00:50.846 *****
2026-02-05 00:44:22.335664 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-05 00:44:22.335673 | orchestrator |
2026-02-05 00:44:22.335682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:22.335692 | orchestrator | Thursday 05 February 2026 00:44:21 +0000 (0:00:00.304) 0:00:51.150 *****
2026-02-05 00:44:22.335701 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-05 00:44:22.335711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-05 00:44:22.335720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-05 00:44:22.335730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-05 00:44:22.335739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-05 00:44:22.335749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-05 00:44:22.335758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-05 00:44:22.335767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-05 00:44:22.335777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-05 00:44:22.335786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-05 00:44:22.335796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-05 00:44:22.335813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-05 00:44:30.884002 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-05 00:44:30.884183 | orchestrator |
2026-02-05 00:44:30.884197 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884206 | orchestrator | Thursday 05 February 2026 00:44:22 +0000 (0:00:00.470) 0:00:51.620 *****
2026-02-05 00:44:30.884236 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884246 | orchestrator |
2026-02-05 00:44:30.884254 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884263 | orchestrator | Thursday 05 February 2026 00:44:22 +0000 (0:00:00.199) 0:00:51.820 *****
2026-02-05 00:44:30.884271 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884279 | orchestrator |
2026-02-05 00:44:30.884287 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884295 | orchestrator | Thursday 05 February 2026 00:44:23 +0000 (0:00:00.511) 0:00:52.332 *****
2026-02-05 00:44:30.884303 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884310 | orchestrator |
2026-02-05 00:44:30.884318 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884326 | orchestrator | Thursday 05 February 2026 00:44:23 +0000 (0:00:00.217) 0:00:52.549 *****
2026-02-05 00:44:30.884334 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884342 | orchestrator |
2026-02-05 00:44:30.884350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884358 | orchestrator | Thursday 05 February 2026 00:44:23 +0000 (0:00:00.191) 0:00:52.740 *****
2026-02-05 00:44:30.884366 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884373 | orchestrator |
2026-02-05 00:44:30.884381 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884389 | orchestrator | Thursday 05 February 2026 00:44:23 +0000 (0:00:00.197) 0:00:52.938 *****
2026-02-05 00:44:30.884396 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884404 | orchestrator |
2026-02-05 00:44:30.884424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884432 | orchestrator | Thursday 05 February 2026 00:44:23 +0000 (0:00:00.171) 0:00:53.109 *****
2026-02-05 00:44:30.884489 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884506 | orchestrator |
2026-02-05 00:44:30.884515 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884523 | orchestrator | Thursday 05 February 2026 00:44:24 +0000 (0:00:00.186) 0:00:53.296 *****
2026-02-05 00:44:30.884530 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884538 | orchestrator |
2026-02-05 00:44:30.884546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884556 | orchestrator | Thursday 05 February 2026 00:44:24 +0000 (0:00:00.174) 0:00:53.470 *****
2026-02-05 00:44:30.884566 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-05 00:44:30.884580 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-05 00:44:30.884594 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-05 00:44:30.884606 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-05 00:44:30.884619 | orchestrator |
2026-02-05 00:44:30.884633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884647 | orchestrator | Thursday 05 February 2026 00:44:24 +0000 (0:00:00.597) 0:00:54.068 *****
2026-02-05 00:44:30.884660 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884673 | orchestrator |
2026-02-05 00:44:30.884687 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884699 | orchestrator | Thursday 05 February 2026 00:44:25 +0000 (0:00:00.209) 0:00:54.278 *****
2026-02-05 00:44:30.884713 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884726 | orchestrator |
2026-02-05 00:44:30.884739 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884753 | orchestrator | Thursday 05 February 2026 00:44:25 +0000 (0:00:00.189) 0:00:54.467 *****
2026-02-05 00:44:30.884767 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884780 | orchestrator |
2026-02-05 00:44:30.884793 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-05 00:44:30.884807 | orchestrator | Thursday 05 February 2026 00:44:25 +0000 (0:00:00.173) 0:00:54.641 *****
2026-02-05 00:44:30.884833 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884847 | orchestrator |
2026-02-05 00:44:30.884858 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-05 00:44:30.884867 | orchestrator | Thursday 05 February 2026 00:44:25 +0000 (0:00:00.191) 0:00:54.832 *****
2026-02-05 00:44:30.884876 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.884885 | orchestrator |
2026-02-05 00:44:30.884895 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-05 00:44:30.884904 | orchestrator | Thursday 05 February 2026 00:44:25 +0000 (0:00:00.252) 0:00:55.085 *****
2026-02-05 00:44:30.884914 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '44714651-8fa8-5efe-842f-d8a32b49e267'}})
2026-02-05 00:44:30.884924 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'}})
2026-02-05 00:44:30.884932 | orchestrator |
2026-02-05 00:44:30.884940 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-05 00:44:30.884948 | orchestrator | Thursday 05 February 2026 00:44:26 +0000 (0:00:00.197) 0:00:55.282 *****
2026-02-05 00:44:30.884957 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.884967 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.884975 | orchestrator |
2026-02-05 00:44:30.884983 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-05 00:44:30.885007 | orchestrator | Thursday 05 February 2026 00:44:28 +0000 (0:00:01.902) 0:00:57.185 *****
2026-02-05 00:44:30.885017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.885027 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.885035 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885042 | orchestrator |
2026-02-05 00:44:30.885050 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-05 00:44:30.885058 | orchestrator | Thursday 05 February 2026 00:44:28 +0000 (0:00:00.154) 0:00:57.339 *****
2026-02-05 00:44:30.885066 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.885074 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.885082 | orchestrator |
2026-02-05 00:44:30.885090 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-05 00:44:30.885098 | orchestrator | Thursday 05 February 2026 00:44:29 +0000 (0:00:01.326) 0:00:58.665 *****
2026-02-05 00:44:30.885106 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.885114 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.885122 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885130 | orchestrator |
2026-02-05 00:44:30.885138 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-05 00:44:30.885146 | orchestrator | Thursday 05 February 2026 00:44:29 +0000 (0:00:00.147) 0:00:58.813 *****
2026-02-05 00:44:30.885153 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885161 | orchestrator |
2026-02-05 00:44:30.885169 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-05 00:44:30.885177 | orchestrator | Thursday 05 February 2026 00:44:29 +0000 (0:00:00.138) 0:00:58.951 *****
2026-02-05 00:44:30.885207 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.885224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.885232 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885240 | orchestrator |
2026-02-05 00:44:30.885248 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-05 00:44:30.885256 | orchestrator | Thursday 05 February 2026 00:44:29 +0000 (0:00:00.150) 0:00:59.102 *****
2026-02-05 00:44:30.885264 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885276 | orchestrator |
2026-02-05 00:44:30.885290 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-05 00:44:30.885303 | orchestrator | Thursday 05 February 2026 00:44:30 +0000 (0:00:00.129) 0:00:59.231 *****
2026-02-05 00:44:30.885316 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.885328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.885342 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885355 | orchestrator |
2026-02-05 00:44:30.885369 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-05 00:44:30.885395 | orchestrator | Thursday 05 February 2026 00:44:30 +0000 (0:00:00.156) 0:00:59.388 *****
2026-02-05 00:44:30.885409 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885424 | orchestrator |
2026-02-05 00:44:30.885458 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-05 00:44:30.885471 | orchestrator | Thursday 05 February 2026 00:44:30 +0000 (0:00:00.142) 0:00:59.530 *****
2026-02-05 00:44:30.885480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:30.885488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:30.885496 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:30.885503 | orchestrator |
2026-02-05 00:44:30.885511 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-05 00:44:30.885519 | orchestrator | Thursday 05 February 2026 00:44:30 +0000 (0:00:00.156) 0:00:59.687 *****
2026-02-05 00:44:30.885527 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:30.885535 | orchestrator |
2026-02-05 00:44:30.885543 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-05 00:44:30.885550 | orchestrator | Thursday 05 February 2026 00:44:30 +0000 (0:00:00.310) 0:00:59.997 *****
2026-02-05 00:44:30.885565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:36.545233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:36.545347 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.545362 | orchestrator |
2026-02-05 00:44:36.545373 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-05 00:44:36.545383 | orchestrator | Thursday 05 February 2026 00:44:30 +0000 (0:00:00.159) 0:01:00.157 *****
2026-02-05 00:44:36.545393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:36.545435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:36.545518 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.545527 | orchestrator |
2026-02-05 00:44:36.545536 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-05 00:44:36.545544 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.151) 0:01:00.309 *****
2026-02-05 00:44:36.545552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:36.545561 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:36.545569 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.545578 | orchestrator |
2026-02-05 00:44:36.545586 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-05 00:44:36.545608 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.150) 0:01:00.460 *****
2026-02-05 00:44:36.545617 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.545626 | orchestrator |
2026-02-05 00:44:36.545684 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-05 00:44:36.545693 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.134) 0:01:00.594 *****
2026-02-05 00:44:36.545702 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.545733 | orchestrator |
2026-02-05 00:44:36.545745 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-05 00:44:36.545755 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.131) 0:01:00.726 *****
2026-02-05 00:44:36.545765 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.545776 | orchestrator |
2026-02-05 00:44:36.545787 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-05 00:44:36.545797 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.130) 0:01:00.856 *****
2026-02-05 00:44:36.545807 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 00:44:36.545818 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-05 00:44:36.545829 | orchestrator | }
2026-02-05 00:44:36.545840 | orchestrator |
2026-02-05 00:44:36.545851 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-05 00:44:36.545861 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.140) 0:01:00.996 *****
2026-02-05 00:44:36.545872 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 00:44:36.545883 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-05 00:44:36.545893 | orchestrator | }
2026-02-05 00:44:36.545904 | orchestrator |
2026-02-05 00:44:36.545914 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-05 00:44:36.545925 | orchestrator | Thursday 05 February 2026 00:44:31 +0000 (0:00:00.134) 0:01:01.131 *****
2026-02-05 00:44:36.545936 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 00:44:36.545946 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-05 00:44:36.545957 | orchestrator | }
2026-02-05 00:44:36.545966 | orchestrator |
2026-02-05 00:44:36.545975 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-05 00:44:36.545984 | orchestrator | Thursday 05 February 2026 00:44:32 +0000 (0:00:00.153) 0:01:01.285 *****
2026-02-05 00:44:36.545993 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:36.546002 | orchestrator |
2026-02-05 00:44:36.546010 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-05 00:44:36.546067 | orchestrator | Thursday 05 February 2026 00:44:32 +0000 (0:00:00.487) 0:01:01.772 *****
2026-02-05 00:44:36.546076 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:36.546085 | orchestrator |
2026-02-05 00:44:36.546094 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-05 00:44:36.546103 | orchestrator | Thursday 05 February 2026 00:44:33 +0000 (0:00:00.502) 0:01:02.275 *****
2026-02-05 00:44:36.546111 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:36.546130 | orchestrator |
2026-02-05 00:44:36.546138 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-05 00:44:36.546147 | orchestrator | Thursday 05 February 2026 00:44:33 +0000 (0:00:00.692) 0:01:02.968 *****
2026-02-05 00:44:36.546156 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:36.546165 | orchestrator |
2026-02-05 00:44:36.546174 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-05 00:44:36.546182 | orchestrator | Thursday 05 February 2026 00:44:33 +0000 (0:00:00.147) 0:01:03.115 *****
2026-02-05 00:44:36.546191 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546200 | orchestrator |
2026-02-05 00:44:36.546209 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-05 00:44:36.546219 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.097) 0:01:03.213 *****
2026-02-05 00:44:36.546228 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546236 | orchestrator |
2026-02-05 00:44:36.546245 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-05 00:44:36.546254 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.104) 0:01:03.317 *****
2026-02-05 00:44:36.546263 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 00:44:36.546272 | orchestrator |  "vgs_report": {
2026-02-05 00:44:36.546282 | orchestrator |  "vg": []
2026-02-05 00:44:36.546310 | orchestrator |  }
2026-02-05 00:44:36.546320 | orchestrator | }
2026-02-05 00:44:36.546329 | orchestrator |
2026-02-05 00:44:36.546338 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-05 00:44:36.546347 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.140) 0:01:03.458 *****
2026-02-05 00:44:36.546356 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546364 | orchestrator |
2026-02-05 00:44:36.546376 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-05 00:44:36.546391 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.136) 0:01:03.594 *****
2026-02-05 00:44:36.546404 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546429 | orchestrator |
2026-02-05 00:44:36.546464 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-05 00:44:36.546479 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.146) 0:01:03.741 *****
2026-02-05 00:44:36.546493 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546507 | orchestrator |
2026-02-05 00:44:36.546519 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-05 00:44:36.546532 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.132) 0:01:03.873 *****
2026-02-05 00:44:36.546546 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546559 | orchestrator |
2026-02-05 00:44:36.546573 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-05 00:44:36.546585 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.138) 0:01:04.012 *****
2026-02-05 00:44:36.546598 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546611 | orchestrator |
2026-02-05 00:44:36.546624 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-05 00:44:36.546636 | orchestrator | Thursday 05 February 2026 00:44:34 +0000 (0:00:00.129) 0:01:04.141 *****
2026-02-05 00:44:36.546649 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546662 | orchestrator |
2026-02-05 00:44:36.546674 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-05 00:44:36.546695 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.125) 0:01:04.267 *****
2026-02-05 00:44:36.546709 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546722 | orchestrator |
2026-02-05 00:44:36.546736 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-05 00:44:36.546749 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.130) 0:01:04.397 *****
2026-02-05 00:44:36.546763 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546776 | orchestrator |
2026-02-05 00:44:36.546790 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-05 00:44:36.546816 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.242) 0:01:04.639 *****
2026-02-05 00:44:36.546831 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546846 | orchestrator |
2026-02-05 00:44:36.546862 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-05 00:44:36.546878 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.136) 0:01:04.775 *****
2026-02-05 00:44:36.546893 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546907 | orchestrator |
2026-02-05 00:44:36.546920 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-05 00:44:36.546934 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.123) 0:01:04.899 *****
2026-02-05 00:44:36.546947 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.546961 | orchestrator |
2026-02-05 00:44:36.546975 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-05 00:44:36.546989 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.122) 0:01:05.021 *****
2026-02-05 00:44:36.547004 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.547017 | orchestrator |
2026-02-05 00:44:36.547031 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-05 00:44:36.547045 | orchestrator | Thursday 05 February 2026 00:44:35 +0000 (0:00:00.121) 0:01:05.143 *****
2026-02-05 00:44:36.547059 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.547073 | orchestrator |
2026-02-05 00:44:36.547088 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-05 00:44:36.547107 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.126) 0:01:05.269 *****
2026-02-05 00:44:36.547117 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.547125 | orchestrator |
2026-02-05 00:44:36.547134 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-05 00:44:36.547143 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.116) 0:01:05.386 *****
2026-02-05 00:44:36.547152 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:36.547162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:36.547190 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.547199 | orchestrator |
2026-02-05 00:44:36.547208 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-05 00:44:36.547216 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.137) 0:01:05.523 *****
2026-02-05 00:44:36.547225 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:36.547234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:36.547243 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:36.547251 | orchestrator |
2026-02-05 00:44:36.547260 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-05 00:44:36.547269 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.143) 0:01:05.666 *****
2026-02-05 00:44:36.547290 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.374155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.374261 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:39.374280 | orchestrator |
2026-02-05 00:44:39.374295 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-05 00:44:39.374310 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.134) 0:01:05.801 *****
2026-02-05 00:44:39.374380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.374397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.374411 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:39.374425 | orchestrator |
2026-02-05 00:44:39.374460 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-05 00:44:39.374474 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.127) 0:01:05.928 *****
2026-02-05 00:44:39.374488 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.374518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.374531 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:39.374544 | orchestrator |
2026-02-05 00:44:39.374557 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-05 00:44:39.374568 | orchestrator | Thursday 05 February 2026 00:44:36 +0000 (0:00:00.140) 0:01:06.069 *****
2026-02-05 00:44:39.374581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.374593 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.374606 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:39.374622 | orchestrator |
2026-02-05 00:44:39.374636 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-05 00:44:39.374649 | orchestrator | Thursday 05 February 2026 00:44:37 +0000 (0:00:00.294) 0:01:06.363 *****
2026-02-05 00:44:39.374662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.374676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.374690 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:39.374703 | orchestrator |
2026-02-05 00:44:39.374717 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-05 00:44:39.374730 | orchestrator | Thursday 05 February 2026 00:44:37 +0000 (0:00:00.141) 0:01:06.504 *****
2026-02-05 00:44:39.374743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.374757 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.374770 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:44:39.374784 | orchestrator |
2026-02-05 00:44:39.374797 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-05 00:44:39.374811 | orchestrator | Thursday 05 February 2026 00:44:37 +0000 (0:00:00.129) 0:01:06.634 *****
2026-02-05 00:44:39.374826 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:39.374841 | orchestrator |
2026-02-05 00:44:39.374854 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-05 00:44:39.374867 | orchestrator | Thursday 05 February 2026 00:44:37 +0000 (0:00:00.468) 0:01:07.103 *****
2026-02-05 00:44:39.374880 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:39.374892 | orchestrator |
2026-02-05 00:44:39.374905 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-05 00:44:39.374932 | orchestrator | Thursday 05 February 2026 00:44:38 +0000 (0:00:00.513) 0:01:07.616 *****
2026-02-05 00:44:39.374946 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:44:39.374959 | orchestrator |
2026-02-05 00:44:39.374971 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-05 00:44:39.374985 | orchestrator | Thursday 05 February 2026 00:44:38 +0000 (0:00:00.147) 0:01:07.764 *****
2026-02-05 00:44:39.374998 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'vg_name': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.375013 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'vg_name': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})
2026-02-05 00:44:39.375025 | orchestrator |
2026-02-05 00:44:39.375037 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-05 00:44:39.375049 | orchestrator | Thursday 05 February 2026 00:44:38 +0000 (0:00:00.170) 0:01:07.935 *****
2026-02-05 00:44:39.375085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})
2026-02-05 00:44:39.375099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})  2026-02-05 00:44:39.375112 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:44:39.375125 | orchestrator | 2026-02-05 00:44:39.375139 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-05 00:44:39.375152 | orchestrator | Thursday 05 February 2026 00:44:38 +0000 (0:00:00.165) 0:01:08.100 ***** 2026-02-05 00:44:39.375165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})  2026-02-05 00:44:39.375178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})  2026-02-05 00:44:39.375191 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:44:39.375203 | orchestrator | 2026-02-05 00:44:39.375216 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-05 00:44:39.375229 | orchestrator | Thursday 05 February 2026 00:44:39 +0000 (0:00:00.162) 0:01:08.263 ***** 2026-02-05 00:44:39.375242 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'})  2026-02-05 00:44:39.375256 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'})  2026-02-05 00:44:39.375270 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:44:39.375282 | orchestrator | 2026-02-05 00:44:39.375294 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-05 00:44:39.375307 | orchestrator | Thursday 05 February 2026 00:44:39 +0000 (0:00:00.146) 0:01:08.410 ***** 2026-02-05 00:44:39.375320 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-05 00:44:39.375333 | orchestrator |  "lvm_report": { 2026-02-05 00:44:39.375347 | orchestrator |  "lv": [ 2026-02-05 00:44:39.375361 | orchestrator |  { 2026-02-05 00:44:39.375374 | orchestrator |  "lv_name": "osd-block-44714651-8fa8-5efe-842f-d8a32b49e267", 2026-02-05 00:44:39.375389 | orchestrator |  "vg_name": "ceph-44714651-8fa8-5efe-842f-d8a32b49e267" 2026-02-05 00:44:39.375403 | orchestrator |  }, 2026-02-05 00:44:39.375416 | orchestrator |  { 2026-02-05 00:44:39.375429 | orchestrator |  "lv_name": "osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685", 2026-02-05 00:44:39.375469 | orchestrator |  "vg_name": "ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685" 2026-02-05 00:44:39.375483 | orchestrator |  } 2026-02-05 00:44:39.375495 | orchestrator |  ], 2026-02-05 00:44:39.375508 | orchestrator |  "pv": [ 2026-02-05 00:44:39.375537 | orchestrator |  { 2026-02-05 00:44:39.375553 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-05 00:44:39.375567 | orchestrator |  "vg_name": "ceph-44714651-8fa8-5efe-842f-d8a32b49e267" 2026-02-05 00:44:39.375579 | orchestrator |  }, 2026-02-05 00:44:39.375592 | orchestrator |  { 2026-02-05 00:44:39.375605 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-05 00:44:39.375619 | orchestrator |  "vg_name": "ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685" 2026-02-05 00:44:39.375632 | orchestrator |  } 2026-02-05 00:44:39.375646 | orchestrator |  ] 2026-02-05 00:44:39.375659 | orchestrator |  } 2026-02-05 00:44:39.375674 | orchestrator | } 2026-02-05 00:44:39.375688 | orchestrator | 2026-02-05 00:44:39.375703 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:44:39.375717 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 00:44:39.375731 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 00:44:39.375745 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-05 00:44:39.375758 | orchestrator | 2026-02-05 00:44:39.375771 | orchestrator | 2026-02-05 00:44:39.375785 | orchestrator | 2026-02-05 00:44:39.375799 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:44:39.375812 | orchestrator | Thursday 05 February 2026 00:44:39 +0000 (0:00:00.135) 0:01:08.545 ***** 2026-02-05 00:44:39.375826 | orchestrator | =============================================================================== 2026-02-05 00:44:39.375839 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s 2026-02-05 00:44:39.375853 | orchestrator | Create block LVs -------------------------------------------------------- 4.27s 2026-02-05 00:44:39.375866 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.03s 2026-02-05 00:44:39.375880 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-02-05 00:44:39.375906 | orchestrator | Add known partitions to the list of available block devices ------------- 1.64s 2026-02-05 00:44:39.375920 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s 2026-02-05 00:44:39.375931 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2026-02-05 00:44:39.375939 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s 2026-02-05 00:44:39.375960 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s 2026-02-05 00:44:39.809921 | orchestrator | Add known partitions to the list of available block devices ------------- 1.17s 2026-02-05 00:44:39.810111 | orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-02-05 00:44:39.810129 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.79s 2026-02-05 00:44:39.810142 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s 2026-02-05 00:44:39.810153 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2026-02-05 00:44:39.810164 | orchestrator | Get initial list of available block devices ----------------------------- 0.65s 2026-02-05 00:44:39.810175 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.64s 2026-02-05 00:44:39.810186 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.62s 2026-02-05 00:44:39.810197 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.61s 2026-02-05 00:44:39.810208 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.60s 2026-02-05 00:44:39.810219 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2026-02-05 00:44:52.233789 | orchestrator | 2026-02-05 00:44:52 | INFO  | Prepare task for execution of facts. 2026-02-05 00:44:52.299466 | orchestrator | 2026-02-05 00:44:52 | INFO  | Task ebcd99f3-6606-4a88-9935-73bf0484e9f8 (facts) was prepared for execution. 2026-02-05 00:44:52.299882 | orchestrator | 2026-02-05 00:44:52 | INFO  | It takes a moment until task ebcd99f3-6606-4a88-9935-73bf0484e9f8 (facts) has been started and output is visible here. 
2026-02-05 00:45:04.083429 | orchestrator | 2026-02-05 00:45:04.083597 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-05 00:45:04.083609 | orchestrator | 2026-02-05 00:45:04.083616 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-05 00:45:04.083623 | orchestrator | Thursday 05 February 2026 00:44:56 +0000 (0:00:00.203) 0:00:00.203 ***** 2026-02-05 00:45:04.083629 | orchestrator | ok: [testbed-manager] 2026-02-05 00:45:04.083637 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:45:04.083643 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:45:04.083650 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:45:04.083656 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:45:04.083663 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:45:04.083669 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:45:04.083676 | orchestrator | 2026-02-05 00:45:04.083683 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-05 00:45:04.083690 | orchestrator | Thursday 05 February 2026 00:44:56 +0000 (0:00:00.855) 0:00:01.059 ***** 2026-02-05 00:45:04.083696 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:45:04.083704 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:45:04.083711 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:45:04.083717 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:45:04.083724 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:45:04.083730 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:45:04.083736 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:45:04.083743 | orchestrator | 2026-02-05 00:45:04.083749 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-05 00:45:04.083755 | orchestrator | 2026-02-05 00:45:04.083763 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-05 00:45:04.083770 | orchestrator | Thursday 05 February 2026 00:44:57 +0000 (0:00:00.983) 0:00:02.042 ***** 2026-02-05 00:45:04.083776 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:45:04.083783 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:45:04.083789 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:45:04.083795 | orchestrator | ok: [testbed-manager] 2026-02-05 00:45:04.083801 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:45:04.083807 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:45:04.083812 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:45:04.083819 | orchestrator | 2026-02-05 00:45:04.083825 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-05 00:45:04.083832 | orchestrator | 2026-02-05 00:45:04.083838 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-05 00:45:04.083845 | orchestrator | Thursday 05 February 2026 00:45:03 +0000 (0:00:05.432) 0:00:07.474 ***** 2026-02-05 00:45:04.083851 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:45:04.083857 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:45:04.083864 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:45:04.083872 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:45:04.083878 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:45:04.083884 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:45:04.083890 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:45:04.083896 | orchestrator | 2026-02-05 00:45:04.083902 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:45:04.083909 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:04.083917 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-05 00:45:04.083953 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:04.083961 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:04.083969 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:04.083977 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:04.083984 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:04.083991 | orchestrator | 2026-02-05 00:45:04.083998 | orchestrator | 2026-02-05 00:45:04.084006 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:45:04.084013 | orchestrator | Thursday 05 February 2026 00:45:03 +0000 (0:00:00.476) 0:00:07.950 ***** 2026-02-05 00:45:04.084021 | orchestrator | =============================================================================== 2026-02-05 00:45:04.084029 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.43s 2026-02-05 00:45:04.084036 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.98s 2026-02-05 00:45:04.084043 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.86s 2026-02-05 00:45:04.084050 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-02-05 00:45:16.123186 | orchestrator | 2026-02-05 00:45:16 | INFO  | Prepare task for execution of frr. 2026-02-05 00:45:16.185522 | orchestrator | 2026-02-05 00:45:16 | INFO  | Task 2f1ab885-27f9-4728-b87c-10f20f308091 (frr) was prepared for execution. 
2026-02-05 00:45:16.185584 | orchestrator | 2026-02-05 00:45:16 | INFO  | It takes a moment until task 2f1ab885-27f9-4728-b87c-10f20f308091 (frr) has been started and output is visible here. 2026-02-05 00:45:39.796139 | orchestrator | 2026-02-05 00:45:39.796288 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-05 00:45:39.796308 | orchestrator | 2026-02-05 00:45:39.796320 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-05 00:45:39.796332 | orchestrator | Thursday 05 February 2026 00:45:19 +0000 (0:00:00.179) 0:00:00.179 ***** 2026-02-05 00:45:39.796343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:45:39.796356 | orchestrator | 2026-02-05 00:45:39.796368 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-05 00:45:39.796379 | orchestrator | Thursday 05 February 2026 00:45:20 +0000 (0:00:00.177) 0:00:00.356 ***** 2026-02-05 00:45:39.796390 | orchestrator | changed: [testbed-manager] 2026-02-05 00:45:39.796402 | orchestrator | 2026-02-05 00:45:39.796413 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-05 00:45:39.796445 | orchestrator | Thursday 05 February 2026 00:45:21 +0000 (0:00:01.134) 0:00:01.491 ***** 2026-02-05 00:45:39.796457 | orchestrator | changed: [testbed-manager] 2026-02-05 00:45:39.796468 | orchestrator | 2026-02-05 00:45:39.796479 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-05 00:45:39.796489 | orchestrator | Thursday 05 February 2026 00:45:29 +0000 (0:00:08.560) 0:00:10.052 ***** 2026-02-05 00:45:39.796500 | orchestrator | ok: [testbed-manager] 2026-02-05 00:45:39.796512 | orchestrator | 2026-02-05 00:45:39.796523 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-05 00:45:39.796534 | orchestrator | Thursday 05 February 2026 00:45:30 +0000 (0:00:00.925) 0:00:10.977 ***** 2026-02-05 00:45:39.796545 | orchestrator | changed: [testbed-manager] 2026-02-05 00:45:39.796577 | orchestrator | 2026-02-05 00:45:39.796589 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-05 00:45:39.796600 | orchestrator | Thursday 05 February 2026 00:45:31 +0000 (0:00:00.969) 0:00:11.946 ***** 2026-02-05 00:45:39.796612 | orchestrator | ok: [testbed-manager] 2026-02-05 00:45:39.796623 | orchestrator | 2026-02-05 00:45:39.796645 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-05 00:45:39.796658 | orchestrator | Thursday 05 February 2026 00:45:32 +0000 (0:00:01.186) 0:00:13.133 ***** 2026-02-05 00:45:39.796669 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:45:39.796680 | orchestrator | 2026-02-05 00:45:39.796691 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-05 00:45:39.796704 | orchestrator | Thursday 05 February 2026 00:45:33 +0000 (0:00:00.134) 0:00:13.267 ***** 2026-02-05 00:45:39.796716 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:45:39.796729 | orchestrator | 2026-02-05 00:45:39.796742 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-05 00:45:39.796755 | orchestrator | Thursday 05 February 2026 00:45:33 +0000 (0:00:00.149) 0:00:13.417 ***** 2026-02-05 00:45:39.796768 | orchestrator | changed: [testbed-manager] 2026-02-05 00:45:39.796781 | orchestrator | 2026-02-05 00:45:39.796794 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-05 00:45:39.796806 | orchestrator | Thursday 05 February 2026 00:45:34 +0000 (0:00:01.009) 0:00:14.426 ***** 2026-02-05 
00:45:39.796819 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-05 00:45:39.796833 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-05 00:45:39.796848 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-05 00:45:39.796860 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-05 00:45:39.796873 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-05 00:45:39.796886 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-05 00:45:39.796899 | orchestrator | 2026-02-05 00:45:39.796912 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-05 00:45:39.796926 | orchestrator | Thursday 05 February 2026 00:45:36 +0000 (0:00:02.234) 0:00:16.661 ***** 2026-02-05 00:45:39.796939 | orchestrator | ok: [testbed-manager] 2026-02-05 00:45:39.796952 | orchestrator | 2026-02-05 00:45:39.796963 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-05 00:45:39.796974 | orchestrator | Thursday 05 February 2026 00:45:38 +0000 (0:00:01.692) 0:00:18.354 ***** 2026-02-05 00:45:39.796985 | orchestrator | changed: [testbed-manager] 2026-02-05 00:45:39.796995 | orchestrator | 2026-02-05 00:45:39.797006 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:45:39.797017 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:45:39.797028 | orchestrator | 2026-02-05 00:45:39.797039 | orchestrator | 2026-02-05 00:45:39.797050 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-05 00:45:39.797061 | orchestrator | Thursday 05 February 2026 00:45:39 +0000 (0:00:01.403) 0:00:19.758 ***** 2026-02-05 00:45:39.797072 | orchestrator | =============================================================================== 2026-02-05 00:45:39.797083 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.56s 2026-02-05 00:45:39.797094 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.23s 2026-02-05 00:45:39.797105 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.69s 2026-02-05 00:45:39.797116 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s 2026-02-05 00:45:39.797135 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 2026-02-05 00:45:39.797168 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.13s 2026-02-05 00:45:39.797181 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.01s 2026-02-05 00:45:39.797192 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.97s 2026-02-05 00:45:39.797203 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.93s 2026-02-05 00:45:39.797214 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.18s 2026-02-05 00:45:39.797225 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-02-05 00:45:39.797236 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-02-05 00:45:40.131393 | orchestrator | 2026-02-05 00:45:40.133415 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Feb 5 00:45:40 UTC 2026 2026-02-05 00:45:40.133519 | 
orchestrator | 2026-02-05 00:45:42.240109 | orchestrator | 2026-02-05 00:45:42 | INFO  | Collection nutshell is prepared for execution 2026-02-05 00:45:42.240202 | orchestrator | 2026-02-05 00:45:42 | INFO  | A [0] - dotfiles 2026-02-05 00:45:52.325551 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - homer 2026-02-05 00:45:52.325636 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - netdata 2026-02-05 00:45:52.325648 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - openstackclient 2026-02-05 00:45:52.325657 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - phpmyadmin 2026-02-05 00:45:52.325675 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - common 2026-02-05 00:45:52.330347 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- loadbalancer 2026-02-05 00:45:52.330487 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [2] --- opensearch 2026-02-05 00:45:52.330667 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [2] --- mariadb-ng 2026-02-05 00:45:52.331260 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [3] ---- horizon 2026-02-05 00:45:52.331530 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [3] ---- keystone 2026-02-05 00:45:52.331779 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- neutron 2026-02-05 00:45:52.332297 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [5] ------ wait-for-nova 2026-02-05 00:45:52.332894 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [6] ------- octavia 2026-02-05 00:45:52.334852 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- barbican 2026-02-05 00:45:52.334995 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- designate 2026-02-05 00:45:52.335385 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- ironic 2026-02-05 00:45:52.335771 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- placement 2026-02-05 00:45:52.336204 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- magnum 2026-02-05 00:45:52.337506 | orchestrator | 2026-02-05 00:45:52 | INFO  | A 
[1] -- openvswitch 2026-02-05 00:45:52.337656 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [2] --- ovn 2026-02-05 00:45:52.338399 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- memcached 2026-02-05 00:45:52.338895 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- redis 2026-02-05 00:45:52.338912 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- rabbitmq-ng 2026-02-05 00:45:52.339575 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - kubernetes 2026-02-05 00:45:52.341987 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- kubeconfig 2026-02-05 00:45:52.342104 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- copy-kubeconfig 2026-02-05 00:45:52.342693 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [0] - ceph 2026-02-05 00:45:52.345150 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [1] -- ceph-pools 2026-02-05 00:45:52.345171 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [2] --- copy-ceph-keys 2026-02-05 00:45:52.345180 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [3] ---- cephclient 2026-02-05 00:45:52.345652 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-05 00:45:52.346708 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- wait-for-keystone 2026-02-05 00:45:52.346743 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-05 00:45:52.346751 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [5] ------ glance 2026-02-05 00:45:52.346758 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [5] ------ cinder 2026-02-05 00:45:52.346765 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [5] ------ nova 2026-02-05 00:45:52.347866 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [4] ----- prometheus 2026-02-05 00:45:52.348667 | orchestrator | 2026-02-05 00:45:52 | INFO  | A [5] ------ grafana 2026-02-05 00:45:52.587930 | orchestrator | 2026-02-05 00:45:52 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-05 00:45:52.588600 
orchestrator | 2026-02-05 00:45:52 | INFO  | Tasks are running in the background
orchestrator | 2026-02-05 00:45:55 | INFO  | No task IDs specified, wait for all currently running tasks
orchestrator | 2026-02-05 00:45:57 | INFO  | Task e3b80169-3b0e-48b8-85d2-3137ffacd9ca is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Task d9e06aff-bb41-4224-b329-099985db327b is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Task c1e1103e-1a3d-47eb-9bc7-2ea074ad8923 is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Task be15e857-0d85-4c34-827e-f7ef21b04c45 is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
orchestrator | 2026-02-05 00:45:57 | INFO  | Wait 1 second(s) until the next check
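The status lines above come from a poll-until-done loop: the client re-reads each task's state on every interval and stops tracking a task once it reaches a terminal state. A minimal sketch of that pattern (the `get_state` callback and `wait_for_tasks` name are hypothetical, not the actual OSISM client API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=60.0):
    """Poll task states until every task leaves STARTED, as in the log above."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)   # stop tracking finished tasks
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

The fixed one-second interval matches the "Wait 1 second(s) until the next check" messages; a production client would typically add backoff and a failure path.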
orchestrator |
orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
orchestrator |
orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
orchestrator | Thursday 05 February 2026 00:46:06 +0000 (0:00:00.881)       0:00:00.881 *****
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
orchestrator | Thursday 05 February 2026 00:46:09 +0000 (0:00:02.575)       0:00:04.361 *****
orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
orchestrator |
orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:02.575)       0:00:06.937 *****
orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:10.010108', 'end': '2026-02-05 00:46:10.014654', 'delta': '0:00:00.004546', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:10.220259', 'end': '2026-02-05 00:46:10.227679', 'delta': '0:00:00.007420', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:10.667102', 'end': '2026-02-05 00:46:10.672938', 'delta': '0:00:00.005836', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:11.104066', 'end': '2026-02-05 00:46:11.110979', 'delta': '0:00:00.006913', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:11.361494', 'end': '2026-02-05 00:46:11.367278', 'delta': '0:00:00.005784', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:11.576891', 'end': '2026-02-05 00:46:11.582931', 'delta': '0:00:00.006040', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-05 00:46:11.948161', 'end': '2026-02-05 00:46:11.957811', 'delta': '0:00:00.009650', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
orchestrator |
orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
orchestrator | Thursday 05 February 2026 00:46:14 +0000 (0:00:02.473)       0:00:09.410 *****
orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
orchestrator |
orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
orchestrator | Thursday 05 February 2026 00:46:16 +0000 (0:00:02.029)       0:00:11.439 *****
orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Thursday 05 February 2026 00:46:20 +0000 (0:00:03.570)       0:00:15.010 *****
orchestrator | ===============================================================================
orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.57s
orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.48s
orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.58s
orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.47s
orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.03s
orchestrator | 2026-02-05 00:46:22 | INFO  | Task e3b80169-3b0e-48b8-85d2-3137ffacd9ca is in state SUCCESS
orchestrator | 2026-02-05 00:46:22 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Task d9e06aff-bb41-4224-b329-099985db327b is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Task c1e1103e-1a3d-47eb-9bc7-2ea074ad8923 is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Task be15e857-0d85-4c34-827e-f7ef21b04c45 is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Task 60ad3f59-0053-454d-96e4-cc1c39c00585 is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
orchestrator | 2026-02-05 00:46:22 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2026-02-05 00:46:47 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
orchestrator | 2026-02-05 00:46:47 | INFO  | Task d9e06aff-bb41-4224-b329-099985db327b is in state STARTED
orchestrator | 2026-02-05 00:46:47 | INFO  | Task c1e1103e-1a3d-47eb-9bc7-2ea074ad8923 is in state STARTED
orchestrator | 2026-02-05 00:46:47 | INFO  | Task be15e857-0d85-4c34-827e-f7ef21b04c45 is in state SUCCESS
orchestrator | 2026-02-05 00:46:47 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED
orchestrator | 2026-02-05 00:46:47 | INFO  | Task 60ad3f59-0053-454d-96e4-cc1c39c00585 is in state STARTED
orchestrator | 2026-02-05 00:46:47 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
orchestrator | 2026-02-05 00:46:47 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2026-02-05 00:46:53 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
orchestrator | 2026-02-05 00:46:53 | INFO  | Task d9e06aff-bb41-4224-b329-099985db327b is in state STARTED
orchestrator | 2026-02-05 00:46:53 | INFO  | Task c1e1103e-1a3d-47eb-9bc7-2ea074ad8923 is in state SUCCESS
orchestrator | 2026-02-05 00:46:53 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED
orchestrator | 2026-02-05 00:46:53 | INFO  | Task 60ad3f59-0053-454d-96e4-cc1c39c00585 is in state STARTED
orchestrator | 2026-02-05 00:46:53 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
orchestrator | 2026-02-05 00:46:53 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2026-02-05 00:47:21 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
orchestrator | 2026-02-05 00:47:21 | INFO  | Task d9e06aff-bb41-4224-b329-099985db327b is in state SUCCESS
orchestrator |
orchestrator | PLAY [Apply role homer] ********************************************************
orchestrator |
orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
orchestrator | Thursday 05 February 2026 00:46:05 +0000 (0:00:00.624)       0:00:00.624 *****
orchestrator | ok: [testbed-manager] => {
orchestrator |     "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-02-05 00:47:21.293616 | orchestrator | }
2026-02-05 00:47:21.293623 | orchestrator |
2026-02-05 00:47:21.293630 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-02-05 00:47:21.293637 | orchestrator | Thursday 05 February 2026 00:46:06 +0000 (0:00:00.362) 0:00:00.987 *****
2026-02-05 00:47:21.293644 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.293651 | orchestrator |
2026-02-05 00:47:21.293658 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-02-05 00:47:21.293664 | orchestrator | Thursday 05 February 2026 00:46:07 +0000 (0:00:01.556) 0:00:02.543 *****
2026-02-05 00:47:21.293672 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-02-05 00:47:21.293686 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-02-05 00:47:21.293693 | orchestrator |
2026-02-05 00:47:21.293699 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-02-05 00:47:21.293706 | orchestrator | Thursday 05 February 2026 00:46:09 +0000 (0:00:01.742) 0:00:04.285 *****
2026-02-05 00:47:21.293714 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.293720 | orchestrator |
2026-02-05 00:47:21.293726 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-02-05 00:47:21.293732 | orchestrator | Thursday 05 February 2026 00:46:13 +0000 (0:00:04.325) 0:00:08.611 *****
2026-02-05 00:47:21.293758 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.293766 | orchestrator |
2026-02-05 00:47:21.293773 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-02-05 00:47:21.293779 | orchestrator | Thursday 05 February 2026 00:46:15 +0000 (0:00:01.817) 0:00:10.429 *****
2026-02-05 00:47:21.293786 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-02-05 00:47:21.293793 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.293800 | orchestrator |
2026-02-05 00:47:21.293806 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-02-05 00:47:21.293813 | orchestrator | Thursday 05 February 2026 00:46:44 +0000 (0:00:29.197) 0:00:39.626 *****
2026-02-05 00:47:21.293820 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.293827 | orchestrator |
2026-02-05 00:47:21.293833 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:47:21.293839 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.293847 | orchestrator |
2026-02-05 00:47:21.293853 | orchestrator |
2026-02-05 00:47:21.293860 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:47:21.293867 | orchestrator | Thursday 05 February 2026 00:46:46 +0000 (0:00:01.885) 0:00:41.512 *****
2026-02-05 00:47:21.293874 | orchestrator | ===============================================================================
2026-02-05 00:47:21.293881 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 29.20s
2026-02-05 00:47:21.293888 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.33s
2026-02-05 00:47:21.293895 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.89s
2026-02-05 00:47:21.293901 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.82s
2026-02-05 00:47:21.293907 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.74s
2026-02-05 00:47:21.293914 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.56s
2026-02-05 00:47:21.293921 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.36s
2026-02-05 00:47:21.293929 | orchestrator |
2026-02-05 00:47:21.293935 | orchestrator |
2026-02-05 00:47:21.293942 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-05 00:47:21.293949 | orchestrator |
2026-02-05 00:47:21.293955 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-05 00:47:21.293961 | orchestrator | Thursday 05 February 2026 00:46:06 +0000 (0:00:00.870) 0:00:00.870 *****
2026-02-05 00:47:21.293967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-05 00:47:21.293976 | orchestrator |
2026-02-05 00:47:21.293982 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-05 00:47:21.293988 | orchestrator | Thursday 05 February 2026 00:46:06 +0000 (0:00:00.337) 0:00:01.208 *****
2026-02-05 00:47:21.293995 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-05 00:47:21.294001 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-05 00:47:21.294010 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-05 00:47:21.294094 | orchestrator |
2026-02-05 00:47:21.294103 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-05 00:47:21.294111 | orchestrator | Thursday 05 February 2026 00:46:09 +0000 (0:00:02.332) 0:00:03.541 *****
2026-02-05 00:47:21.294119 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294127 | orchestrator |
2026-02-05 00:47:21.294135 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-05 00:47:21.294143 | orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:03.320) 0:00:06.861 *****
2026-02-05 00:47:21.294171 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-05 00:47:21.294190 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.294198 | orchestrator |
2026-02-05 00:47:21.294205 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-05 00:47:21.294212 | orchestrator | Thursday 05 February 2026 00:46:47 +0000 (0:00:34.446) 0:00:41.308 *****
2026-02-05 00:47:21.294219 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294225 | orchestrator |
2026-02-05 00:47:21.294232 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-05 00:47:21.294239 | orchestrator | Thursday 05 February 2026 00:46:48 +0000 (0:00:01.277) 0:00:42.585 *****
2026-02-05 00:47:21.294247 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.294254 | orchestrator |
2026-02-05 00:47:21.294261 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-05 00:47:21.294269 | orchestrator | Thursday 05 February 2026 00:46:49 +0000 (0:00:00.670) 0:00:43.256 *****
2026-02-05 00:47:21.294276 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294283 | orchestrator |
2026-02-05 00:47:21.294290 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-05 00:47:21.294296 | orchestrator | Thursday 05 February 2026 00:46:50 +0000 (0:00:01.795) 0:00:45.052 *****
2026-02-05 00:47:21.294303 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294308 | orchestrator |
2026-02-05 00:47:21.294320 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-05 00:47:21.294326 | orchestrator | Thursday 05 February 2026 00:46:51 +0000 (0:00:01.031) 0:00:46.083 *****
2026-02-05 00:47:21.294333 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294340 | orchestrator |
2026-02-05 00:47:21.294346 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-05 00:47:21.294352 | orchestrator | Thursday 05 February 2026 00:46:52 +0000 (0:00:00.775) 0:00:46.858 *****
2026-02-05 00:47:21.294358 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.294365 | orchestrator |
2026-02-05 00:47:21.294371 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:47:21.294378 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.294384 | orchestrator |
2026-02-05 00:47:21.294391 | orchestrator |
2026-02-05 00:47:21.294398 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:47:21.294483 | orchestrator | Thursday 05 February 2026 00:46:52 +0000 (0:00:00.378) 0:00:47.237 *****
2026-02-05 00:47:21.294491 | orchestrator | ===============================================================================
2026-02-05 00:47:21.294497 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.45s
2026-02-05 00:47:21.294505 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.32s
2026-02-05 00:47:21.294511 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.33s
2026-02-05 00:47:21.294518 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.80s
2026-02-05 00:47:21.294525 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.28s
2026-02-05 00:47:21.294531 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.03s
2026-02-05 00:47:21.294538 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.78s
2026-02-05 00:47:21.294544 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s
2026-02-05 00:47:21.294551 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s
2026-02-05 00:47:21.294558 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.34s
2026-02-05 00:47:21.294564 | orchestrator |
2026-02-05 00:47:21.294570 | orchestrator |
2026-02-05 00:47:21.294577 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:47:21.294583 | orchestrator |
2026-02-05 00:47:21.294589 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:47:21.294604 | orchestrator | Thursday 05 February 2026 00:46:04 +0000 (0:00:00.535) 0:00:00.535 *****
2026-02-05 00:47:21.294610 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-05 00:47:21.294617 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-05 00:47:21.294624 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-05 00:47:21.294631 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-05 00:47:21.294637 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-05 00:47:21.294643 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-05 00:47:21.294650 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-05 00:47:21.294656 | orchestrator |
2026-02-05 00:47:21.294663 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-05 00:47:21.294669 | orchestrator |
2026-02-05 00:47:21.294676 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-05 00:47:21.294682 | orchestrator | Thursday 05 February 2026 00:46:06 +0000 (0:00:02.120) 0:00:02.656 *****
2026-02-05 00:47:21.294699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:47:21.294709 | orchestrator |
2026-02-05 00:47:21.294716 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-05 00:47:21.294723 | orchestrator | Thursday 05 February 2026 00:46:08 +0000 (0:00:02.373) 0:00:05.029 *****
2026-02-05 00:47:21.294729 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:21.294736 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:21.294742 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:21.294749 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.294756 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:47:21.294771 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:47:21.294778 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:47:21.294784 | orchestrator |
2026-02-05 00:47:21.294790 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-05 00:47:21.294797 | orchestrator | Thursday 05 February 2026 00:46:10 +0000 (0:00:01.547) 0:00:06.576 *****
2026-02-05 00:47:21.294803 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:21.294808 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:21.294814 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:21.294820 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:47:21.294826 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:47:21.294832 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:47:21.294838 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.294844 | orchestrator |
2026-02-05 00:47:21.294851 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-05 00:47:21.294858 | orchestrator | Thursday 05 February 2026 00:46:13 +0000 (0:00:02.915) 0:00:09.492 *****
2026-02-05 00:47:21.294864 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:21.294871 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294878 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:21.294884 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:21.294890 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:47:21.294897 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:47:21.294903 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:47:21.294909 | orchestrator |
2026-02-05 00:47:21.294920 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-05 00:47:21.294927 | orchestrator | Thursday 05 February 2026 00:46:15 +0000 (0:00:02.262) 0:00:11.755 *****
2026-02-05 00:47:21.294934 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:21.294940 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:21.294946 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:21.294952 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:47:21.294966 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:47:21.294972 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:47:21.294979 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.294985 | orchestrator |
2026-02-05 00:47:21.294992 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-05 00:47:21.294999 | orchestrator | Thursday 05 February 2026 00:46:25 +0000 (0:00:10.385) 0:00:22.140 *****
2026-02-05 00:47:21.295005 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:21.295012 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:21.295018 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:47:21.295024 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:47:21.295030 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:47:21.295037 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:21.295044 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.295050 | orchestrator |
2026-02-05 00:47:21.295056 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-05 00:47:21.295063 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:36.256) 0:00:58.397 *****
2026-02-05 00:47:21.295070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:47:21.295079 | orchestrator |
2026-02-05 00:47:21.295085 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-05 00:47:21.295092 | orchestrator | Thursday 05 February 2026 00:47:03 +0000 (0:00:01.881) 0:01:00.278 *****
2026-02-05 00:47:21.295098 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-05 00:47:21.295104 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-05 00:47:21.295111 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-05 00:47:21.295117 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-05 00:47:21.295122 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-05 00:47:21.295128 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-05 00:47:21.295134 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-05 00:47:21.295141 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-05 00:47:21.295147 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-05 00:47:21.295152 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-05 00:47:21.295158 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-05 00:47:21.295165 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-05 00:47:21.295172 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-05 00:47:21.295179 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-05 00:47:21.295185 | orchestrator |
2026-02-05 00:47:21.295192 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-05 00:47:21.295200 | orchestrator | Thursday 05 February 2026 00:47:08 +0000 (0:00:04.417) 0:01:04.696 *****
2026-02-05 00:47:21.295206 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.295213 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:21.295219 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:21.295226 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:21.295232 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:47:21.295237 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:47:21.295243 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:47:21.295249 | orchestrator |
2026-02-05 00:47:21.295255 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-05 00:47:21.295261 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:01.159) 0:01:05.855 *****
2026-02-05 00:47:21.295267 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.295273 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:21.295279 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:21.295285 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:21.295298 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:47:21.295304 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:47:21.295311 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:47:21.295317 | orchestrator |
2026-02-05 00:47:21.295324 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-05 00:47:21.295342 | orchestrator | Thursday 05 February 2026 00:47:11 +0000 (0:00:01.520) 0:01:07.376 *****
2026-02-05 00:47:21.295521 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.295538 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:21.295544 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:21.295551 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:21.295557 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:47:21.295563 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:47:21.295570 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:47:21.295576 | orchestrator |
2026-02-05 00:47:21.295582 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-05 00:47:21.295589 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:01.381) 0:01:08.758 *****
2026-02-05 00:47:21.295596 | orchestrator | ok: [testbed-manager]
2026-02-05 00:47:21.295602 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:47:21.295609 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:47:21.295616 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:47:21.295622 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:47:21.295629 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:47:21.295636 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:47:21.295642 | orchestrator |
2026-02-05 00:47:21.295648 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-05 00:47:21.295655 | orchestrator | Thursday 05 February 2026 00:47:14 +0000 (0:00:01.976) 0:01:10.734 *****
2026-02-05 00:47:21.295670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-05 00:47:21.295681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:47:21.295689 | orchestrator |
2026-02-05 00:47:21.295710 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-05 00:47:21.295726 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:01.437) 0:01:12.171 *****
2026-02-05 00:47:21.295734 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.295741 | orchestrator |
2026-02-05 00:47:21.295748 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-05 00:47:21.295755 | orchestrator | Thursday 05 February 2026 00:47:17 +0000 (0:00:01.729) 0:01:13.900 *****
2026-02-05 00:47:21.295762 | orchestrator | changed: [testbed-manager]
2026-02-05 00:47:21.295768 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:47:21.295775 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:47:21.295783 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:47:21.295790 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:47:21.295796 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:47:21.295802 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:47:21.295809 | orchestrator |
2026-02-05 00:47:21.295817 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:47:21.295824 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295832 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295839 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295845 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295860 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295866 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295872 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:47:21.295879 | orchestrator |
2026-02-05 00:47:21.295884 | orchestrator |
2026-02-05 00:47:21.295891 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:47:21.295897 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:03.239) 0:01:17.140 *****
2026-02-05 00:47:21.295903 | orchestrator | ===============================================================================
2026-02-05 00:47:21.295908 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 36.26s
2026-02-05 00:47:21.295915 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.39s
2026-02-05 00:47:21.295922 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.42s
2026-02-05 00:47:21.295928 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.24s
2026-02-05 00:47:21.295934 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.92s
2026-02-05 00:47:21.295940 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.37s
2026-02-05 00:47:21.295946 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.26s
2026-02-05 00:47:21.295952 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.12s
2026-02-05 00:47:21.295959 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.98s
2026-02-05 00:47:21.295965 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.88s 2026-02-05
00:47:21.295971 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.73s
2026-02-05 00:47:21.295987 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.55s
2026-02-05 00:47:21.295994 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.52s
2026-02-05 00:47:21.296000 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.44s
2026-02-05 00:47:21.296008 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.38s
2026-02-05 00:47:21.296015 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.16s
2026-02-05 00:47:21.296022 | orchestrator | 2026-02-05 00:47:21 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED
2026-02-05 00:47:21.296029 | orchestrator | 2026-02-05 00:47:21 | INFO  | Task 60ad3f59-0053-454d-96e4-cc1c39c00585 is in state STARTED
2026-02-05 00:47:21.307692 | orchestrator | 2026-02-05 00:47:21 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:47:21.307779 | orchestrator | 2026-02-05 00:47:21 | INFO  | Wait 1 second(s) until the next check
[identical STARTED status checks for tasks daa69367-8355-4d51-a80f-838e2986c19d, a1027ee8-3d5a-4b85-9d95-9792f42cc824, 60ad3f59-0053-454d-96e4-cc1c39c00585 and 239a4d73-fcf4-4af5-80ed-6bbec79e7988 repeated at 00:47:24 and 00:47:27]
2026-02-05 00:47:30.440022 | orchestrator | 2026-02-05 00:47:30 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:47:30.441816 | orchestrator | 2026-02-05 00:47:30 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED
2026-02-05 00:47:30.442356 | orchestrator | 2026-02-05 00:47:30 | INFO  | Task 60ad3f59-0053-454d-96e4-cc1c39c00585 is in state SUCCESS
2026-02-05 00:47:30.445568 | orchestrator | 2026-02-05 00:47:30 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:47:30.445613 | orchestrator | 2026-02-05 00:47:30 | INFO  | Wait 1 second(s) until the next check
[identical STARTED status checks for tasks daa69367-8355-4d51-a80f-838e2986c19d, a1027ee8-3d5a-4b85-9d95-9792f42cc824 and 239a4d73-fcf4-4af5-80ed-6bbec79e7988 repeated every ~3 seconds from 00:47:33 to 00:47:57]
2026-02-05 00:48:00.911818 | orchestrator | 
2026-02-05 00:48:00 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:48:00.911895 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED 2026-02-05 00:48:00.912078 | orchestrator | 2026-02-05 00:48:00 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED 2026-02-05 00:48:00.912186 | orchestrator | 2026-02-05 00:48:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:03.955701 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:48:03.956233 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED 2026-02-05 00:48:03.957442 | orchestrator | 2026-02-05 00:48:03 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED 2026-02-05 00:48:03.957501 | orchestrator | 2026-02-05 00:48:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:07.007745 | orchestrator | 2026-02-05 00:48:07 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:48:07.013271 | orchestrator | 2026-02-05 00:48:07 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state STARTED 2026-02-05 00:48:07.016762 | orchestrator | 2026-02-05 00:48:07 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED 2026-02-05 00:48:07.016815 | orchestrator | 2026-02-05 00:48:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:10.058569 | orchestrator | 2026-02-05 00:48:10 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:48:10.061864 | orchestrator | 2026-02-05 00:48:10 | INFO  | Task a1027ee8-3d5a-4b85-9d95-9792f42cc824 is in state SUCCESS 2026-02-05 00:48:10.063439 | orchestrator | 2026-02-05 00:48:10.063492 | orchestrator | 2026-02-05 00:48:10.063500 | orchestrator | PLAY [Apply role phpmyadmin] 
*************************************************** 2026-02-05 00:48:10.063508 | orchestrator | 2026-02-05 00:48:10.063514 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-02-05 00:48:10.063521 | orchestrator | Thursday 05 February 2026 00:46:27 +0000 (0:00:00.697) 0:00:00.697 ***** 2026-02-05 00:48:10.063527 | orchestrator | ok: [testbed-manager] 2026-02-05 00:48:10.063535 | orchestrator | 2026-02-05 00:48:10.063542 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-02-05 00:48:10.063549 | orchestrator | Thursday 05 February 2026 00:46:28 +0000 (0:00:01.098) 0:00:01.796 ***** 2026-02-05 00:48:10.063556 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-02-05 00:48:10.063563 | orchestrator | 2026-02-05 00:48:10.063570 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-02-05 00:48:10.063576 | orchestrator | Thursday 05 February 2026 00:46:29 +0000 (0:00:01.232) 0:00:03.029 ***** 2026-02-05 00:48:10.063583 | orchestrator | changed: [testbed-manager] 2026-02-05 00:48:10.063589 | orchestrator | 2026-02-05 00:48:10.063596 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-02-05 00:48:10.063608 | orchestrator | Thursday 05 February 2026 00:46:31 +0000 (0:00:02.195) 0:00:05.224 ***** 2026-02-05 00:48:10.063615 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-02-05 00:48:10.063622 | orchestrator | ok: [testbed-manager] 2026-02-05 00:48:10.063629 | orchestrator | 2026-02-05 00:48:10.063635 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-02-05 00:48:10.063727 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:47.457) 0:00:52.681 ***** 2026-02-05 00:48:10.063735 | orchestrator | changed: [testbed-manager] 2026-02-05 00:48:10.063741 | orchestrator | 2026-02-05 00:48:10.063748 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:48:10.063754 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:48:10.063761 | orchestrator | 2026-02-05 00:48:10.063768 | orchestrator | 2026-02-05 00:48:10.063774 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:48:10.063781 | orchestrator | Thursday 05 February 2026 00:47:30 +0000 (0:00:11.106) 0:01:03.788 ***** 2026-02-05 00:48:10.063787 | orchestrator | =============================================================================== 2026-02-05 00:48:10.063794 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 47.46s 2026-02-05 00:48:10.063800 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.11s 2026-02-05 00:48:10.063806 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.19s 2026-02-05 00:48:10.063812 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.23s 2026-02-05 00:48:10.063819 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.10s 2026-02-05 00:48:10.063825 | orchestrator | 2026-02-05 00:48:10.063831 | orchestrator | 2026-02-05 00:48:10.063837 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-02-05 00:48:10.063843 | orchestrator | 2026-02-05 00:48:10.064165 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-05 00:48:10.064179 | orchestrator | Thursday 05 February 2026 00:45:57 +0000 (0:00:00.225) 0:00:00.225 ***** 2026-02-05 00:48:10.064185 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:48:10.064208 | orchestrator | 2026-02-05 00:48:10.064214 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-05 00:48:10.064221 | orchestrator | Thursday 05 February 2026 00:45:58 +0000 (0:00:01.099) 0:00:01.325 ***** 2026-02-05 00:48:10.064226 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064232 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064239 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064246 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 00:48:10.064252 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064258 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064265 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064269 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-05 00:48:10.064274 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 00:48:10.064280 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-02-05 00:48:10.064286 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 00:48:10.064292 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 00:48:10.064298 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 00:48:10.064304 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064311 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-05 00:48:10.064318 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064351 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064358 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064364 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064369 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064376 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-05 00:48:10.064382 | orchestrator | 2026-02-05 00:48:10.064389 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-05 00:48:10.064406 | orchestrator | Thursday 05 February 2026 00:46:02 +0000 (0:00:03.874) 0:00:05.200 ***** 2026-02-05 00:48:10.064412 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:48:10.064420 | orchestrator | 2026-02-05 00:48:10.064432 
| orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-05 00:48:10.064439 | orchestrator | Thursday 05 February 2026 00:46:03 +0000 (0:00:01.262) 0:00:06.462 ***** 2026-02-05 00:48:10.064449 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064484 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064532 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.064543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064598 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-05 00:48:10.064616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.064643 | orchestrator | 2026-02-05 00:48:10.064650 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2026-02-05 00:48:10.064656 | orchestrator | Thursday 05 February 2026 00:46:08 +0000 (0:00:05.128) 0:00:11.591 ***** 2026-02-05 00:48:10.064680 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:48:10.064690 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.064702 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.064712 | orchestrator | skipping: [testbed-manager] 2026-02-05 00:48:10.064718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-05 00:48:10.064725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.064731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.064737 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:48:10.064743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064769 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:48:10.064775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064806 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:48:10.064812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064833 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:48:10.064841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064866 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064874 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:48:10.064881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064902 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:48:10.064908 | orchestrator |
2026-02-05 00:48:10.064914 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-05 00:48:10.064922 | orchestrator | Thursday 05 February 2026  00:46:10 +0000 (0:00:01.704)       0:00:13.296 *****
2026-02-05 00:48:10.064929 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064936 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064948 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064961 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:48:10.064973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.064980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.064995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065015 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:48:10.065021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065053 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:48:10.065059 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:48:10.065066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065086 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:48:10.065093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065121 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:48:10.065128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065152 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:48:10.065158 | orchestrator |
2026-02-05 00:48:10.065165 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-05 00:48:10.065171 | orchestrator | Thursday 05 February 2026  00:46:12 +0000 (0:00:02.032)       0:00:15.328 *****
2026-02-05 00:48:10.065178 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:48:10.065184 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:48:10.065191 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:48:10.065197 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:48:10.065203 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:48:10.065210 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:48:10.065216 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:48:10.065222 | orchestrator |
2026-02-05 00:48:10.065228 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-05 00:48:10.065235 | orchestrator | Thursday 05 February 2026  00:46:13 +0000 (0:00:00.936)       0:00:16.264 *****
2026-02-05 00:48:10.065241 | orchestrator | skipping: [testbed-manager]
2026-02-05 00:48:10.065247 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:48:10.065253 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:48:10.065259 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:48:10.065266 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:48:10.065272 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:48:10.065279 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:48:10.065285 | orchestrator |
2026-02-05 00:48:10.065292 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-05 00:48:10.065304 | orchestrator | Thursday 05 February 2026  00:46:15 +0000 (0:00:01.445)       0:00:17.710 *****
2026-02-05 00:48:10.065311 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065349 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065356 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065380 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.065420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065459 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065485 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.065506 | orchestrator |
2026-02-05 00:48:10.065512 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-05 00:48:10.065519 | orchestrator | Thursday 05 February 2026  00:46:22 +0000 (0:00:06.917)       0:00:24.627 *****
2026-02-05 00:48:10.065525 | orchestrator | [WARNING]: Skipped
2026-02-05 00:48:10.065533 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-05 00:48:10.065539 | orchestrator | to this access issue:
2026-02-05 00:48:10.065546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-05 00:48:10.065552 | orchestrator | directory
2026-02-05 00:48:10.065559 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:48:10.065570 | orchestrator |
2026-02-05 00:48:10.065577 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-05 00:48:10.065583 | orchestrator | Thursday 05 February 2026  00:46:26 +0000 (0:00:04.355)       0:00:28.983 *****
2026-02-05 00:48:10.065589 | orchestrator | [WARNING]: Skipped
2026-02-05 00:48:10.065596 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-05 00:48:10.065602 | orchestrator | to this access issue:
2026-02-05 00:48:10.065608 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-05 00:48:10.065615 | orchestrator | directory
2026-02-05 00:48:10.065622 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:48:10.065628 | orchestrator |
2026-02-05 00:48:10.065634 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-05 00:48:10.065640 | orchestrator | Thursday 05 February 2026  00:46:27 +0000 (0:00:01.360)       0:00:30.344 *****
2026-02-05 00:48:10.065646 | orchestrator | [WARNING]: Skipped
2026-02-05 00:48:10.065652 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-05 00:48:10.065780 | orchestrator | to this access issue:
2026-02-05 00:48:10.065791 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-05 00:48:10.065798 | orchestrator | directory
2026-02-05 00:48:10.065805 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:48:10.065811 | orchestrator |
2026-02-05 00:48:10.065818 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-05 00:48:10.065825 | orchestrator | Thursday 05 February 2026  00:46:29 +0000 (0:00:01.264)       0:00:31.608 *****
2026-02-05 00:48:10.065832 | orchestrator | [WARNING]: Skipped
2026-02-05 00:48:10.065839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-05 00:48:10.065845 | orchestrator | to this access issue:
2026-02-05 00:48:10.065852 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-05 00:48:10.065859 | orchestrator | directory
2026-02-05 00:48:10.065865 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 00:48:10.065872 | orchestrator |
2026-02-05 00:48:10.065878 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-05 00:48:10.065885 | orchestrator | Thursday 05 February 2026  00:46:30 +0000 (0:00:01.650)       0:00:33.258 *****
2026-02-05 00:48:10.065891 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:10.065898 | orchestrator | changed: [testbed-manager]
2026-02-05 00:48:10.065904 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:10.065911 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:10.065917 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:10.065924 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:10.065931 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:10.065938 | orchestrator |
2026-02-05 00:48:10.065944 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-05 00:48:10.065951 | orchestrator | Thursday 05 February 2026  00:46:35 +0000 (0:00:04.367)       0:00:37.626 *****
2026-02-05 00:48:10.065958 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.065984 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.065992 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.066007 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.066064 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.066071 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.066077 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-05 00:48:10.066091 | orchestrator |
2026-02-05 00:48:10.066098 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-05 00:48:10.066105 | orchestrator | Thursday 05 February 2026  00:46:38 +0000 (0:00:03.176)       0:00:40.802 *****
2026-02-05 00:48:10.066112 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:10.066119 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:10.066126 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:10.066132 | orchestrator | changed: [testbed-manager]
2026-02-05 00:48:10.066139 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:10.066145 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:10.066155 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:10.066163 | orchestrator |
2026-02-05 00:48:10.066170 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-05 00:48:10.066176 | orchestrator | Thursday 05 February 2026  00:46:41 +0000 (0:00:03.323)       0:00:44.126 *****
2026-02-05 00:48:10.066184 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-05 00:48:10.066191 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:48:10.066198 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.066212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066227 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066239 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.066253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066261 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.066284 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066291 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.066316 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066324 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.066348 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:48:10.066361 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066368 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066374 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066384 | orchestrator | 2026-02-05 00:48:10.066431 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-05 00:48:10.066439 | orchestrator | Thursday 05 February 2026 00:46:43 +0000 (0:00:02.387) 0:00:46.514 ***** 2026-02-05 00:48:10.066446 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066452 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066476 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066482 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066488 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-05 00:48:10.066495 | orchestrator | 2026-02-05 00:48:10.066509 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-05 00:48:10.066516 | orchestrator | Thursday 05 February 2026 00:46:47 +0000 (0:00:03.189) 0:00:49.704 ***** 2026-02-05 00:48:10.066523 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066539 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066546 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066552 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066558 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066565 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-05 00:48:10.066571 | orchestrator | 2026-02-05 00:48:10.066577 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-05 00:48:10.066583 | orchestrator | Thursday 05 February 2026 00:46:49 +0000 (0:00:02.275) 0:00:51.980 ***** 2026-02-05 00:48:10.066590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066604 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066612 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066642 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-05 00:48:10.066653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066661 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-05 00:48:10.066680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066686 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-05 00:48:10.066698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:48:10.066702 | orchestrator | 2026-02-05 00:48:10.066708 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-05 00:48:10.066712 | orchestrator | Thursday 05 February 2026 00:46:52 +0000 (0:00:03.586) 0:00:55.566 ***** 2026-02-05 00:48:10.066716 | orchestrator | changed: [testbed-manager] 2026-02-05 00:48:10.066720 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:48:10.066724 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:48:10.066728 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:48:10.066732 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:48:10.066735 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:48:10.066739 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:48:10.066743 | orchestrator | 2026-02-05 00:48:10.066747 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-05 00:48:10.066750 | orchestrator | Thursday 05 February 2026 00:46:54 +0000 (0:00:01.853) 0:00:57.420 ***** 2026-02-05 00:48:10.066754 | orchestrator | changed: [testbed-manager] 2026-02-05 00:48:10.066764 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:48:10.066768 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:48:10.066772 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:48:10.066775 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:48:10.066779 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:48:10.066783 | orchestrator 
| changed: [testbed-node-5]
2026-02-05 00:48:10.066787 | orchestrator |
2026-02-05 00:48:10.066793 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066797 | orchestrator | Thursday 05 February 2026 00:46:55 +0000 (0:00:01.147) 0:00:58.567 *****
2026-02-05 00:48:10.066801 | orchestrator |
2026-02-05 00:48:10.066804 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066808 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.065) 0:00:58.633 *****
2026-02-05 00:48:10.066812 | orchestrator |
2026-02-05 00:48:10.066816 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066820 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.059) 0:00:58.692 *****
2026-02-05 00:48:10.066824 | orchestrator |
2026-02-05 00:48:10.066828 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066835 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.060) 0:00:58.753 *****
2026-02-05 00:48:10.066842 | orchestrator |
2026-02-05 00:48:10.066849 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066855 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.173) 0:00:58.927 *****
2026-02-05 00:48:10.066862 | orchestrator |
2026-02-05 00:48:10.066868 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066875 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.059) 0:00:58.986 *****
2026-02-05 00:48:10.066882 | orchestrator |
2026-02-05 00:48:10.066890 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-05 00:48:10.066897 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.060) 0:00:59.047 *****
2026-02-05 00:48:10.066904 | orchestrator |
2026-02-05 00:48:10.066910 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-05 00:48:10.066917 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:00.082) 0:00:59.129 *****
2026-02-05 00:48:10.066921 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:10.066925 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:10.066929 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:10.066932 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:10.066936 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:10.066941 | orchestrator | changed: [testbed-manager]
2026-02-05 00:48:10.066945 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:10.066950 | orchestrator |
2026-02-05 00:48:10.066954 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-05 00:48:10.066959 | orchestrator | Thursday 05 February 2026 00:47:25 +0000 (0:00:29.229) 0:01:28.358 *****
2026-02-05 00:48:10.066963 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:10.066967 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:10.066972 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:10.066976 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:10.066981 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:10.066985 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:10.066990 | orchestrator | changed: [testbed-manager]
2026-02-05 00:48:10.066994 | orchestrator |
2026-02-05 00:48:10.066999 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-05 00:48:10.067003 | orchestrator | Thursday 05 February 2026 00:47:58 +0000 (0:00:32.508) 0:02:00.866 *****
2026-02-05 00:48:10.067008 | orchestrator | ok: [testbed-manager]
2026-02-05 00:48:10.067012 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:48:10.067017 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:48:10.067021 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:48:10.067025 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:48:10.067030 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:48:10.067034 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:48:10.067047 | orchestrator |
2026-02-05 00:48:10.067055 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-05 00:48:10.067065 | orchestrator | Thursday 05 February 2026 00:48:00 +0000 (0:00:02.066) 0:02:02.933 *****
2026-02-05 00:48:10.067071 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:10.067078 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:48:10.067084 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:10.067091 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:10.067098 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:48:10.067103 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:48:10.067108 | orchestrator | changed: [testbed-manager]
2026-02-05 00:48:10.067113 | orchestrator |
2026-02-05 00:48:10.067117 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:48:10.067122 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067128 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067141 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067145 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067150 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067155 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067159 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-05 00:48:10.067164 | orchestrator |
2026-02-05 00:48:10.067168 | orchestrator |
2026-02-05 00:48:10.067173 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:48:10.067180 | orchestrator | Thursday 05 February 2026 00:48:09 +0000 (0:00:09.004) 0:02:11.937 *****
2026-02-05 00:48:10.067185 | orchestrator | ===============================================================================
2026-02-05 00:48:10.067189 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.51s
2026-02-05 00:48:10.067193 | orchestrator | common : Restart fluentd container ------------------------------------- 29.23s
2026-02-05 00:48:10.067198 | orchestrator | common : Restart cron container ----------------------------------------- 9.00s
2026-02-05 00:48:10.067202 | orchestrator | common : Copying over config.json files for services -------------------- 6.92s
2026-02-05 00:48:10.067206 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.13s
2026-02-05 00:48:10.067211 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.37s
2026-02-05 00:48:10.067216 | orchestrator | common : Find custom fluentd input config files ------------------------- 4.36s
2026-02-05 00:48:10.067220 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.87s
2026-02-05 00:48:10.067231 | orchestrator | common : Check common containers ---------------------------------------- 3.59s
2026-02-05 00:48:10.067235 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.32s
2026-02-05 00:48:10.067239 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.19s
2026-02-05 00:48:10.067244 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.18s
2026-02-05 00:48:10.067248 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.39s
2026-02-05 00:48:10.067252 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.28s
2026-02-05 00:48:10.067256 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.07s
2026-02-05 00:48:10.067261 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.03s
2026-02-05 00:48:10.067266 | orchestrator | common : Creating log volume -------------------------------------------- 1.85s
2026-02-05 00:48:10.067270 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.70s
2026-02-05 00:48:10.067275 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.65s
2026-02-05 00:48:10.067279 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.45s
2026-02-05 00:48:10.067284 | orchestrator | 2026-02-05 00:48:10 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:10.067289 | orchestrator | 2026-02-05 00:48:10 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:13.104120 | orchestrator | 2026-02-05 00:48:13 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:13.104694 | orchestrator | 2026-02-05 00:48:13 | INFO  | Task cfa66c7b-7ce9-4ee2-aeaf-4297eee815db is in state STARTED
2026-02-05 00:48:13.105744 | orchestrator | 2026-02-05 00:48:13 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:13.108493 | orchestrator | 2026-02-05 00:48:13 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:13.109496 | orchestrator | 2026-02-05 00:48:13 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:13.111770 | orchestrator | 2026-02-05 00:48:13 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:13.111829 | orchestrator | 2026-02-05 00:48:13 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:16.140440 | orchestrator | 2026-02-05 00:48:16 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:16.140802 | orchestrator | 2026-02-05 00:48:16 | INFO  | Task cfa66c7b-7ce9-4ee2-aeaf-4297eee815db is in state STARTED
2026-02-05 00:48:16.141553 | orchestrator | 2026-02-05 00:48:16 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:16.145144 | orchestrator | 2026-02-05 00:48:16 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:16.145662 | orchestrator | 2026-02-05 00:48:16 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:16.146224 | orchestrator | 2026-02-05 00:48:16 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:16.146248 | orchestrator | 2026-02-05 00:48:16 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:19.173198 | orchestrator | 2026-02-05 00:48:19 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:19.173851 | orchestrator | 2026-02-05 00:48:19 | INFO  | Task cfa66c7b-7ce9-4ee2-aeaf-4297eee815db is in state STARTED
2026-02-05 00:48:19.174281 | orchestrator | 2026-02-05 00:48:19 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:19.175818 | orchestrator | 2026-02-05 00:48:19 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:19.176195 | orchestrator | 2026-02-05 00:48:19 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:19.176707 | orchestrator | 2026-02-05 00:48:19 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:19.176723 | orchestrator | 2026-02-05 00:48:19 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:22.208055 | orchestrator | 2026-02-05 00:48:22 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:22.208598 | orchestrator | 2026-02-05 00:48:22 | INFO  | Task cfa66c7b-7ce9-4ee2-aeaf-4297eee815db is in state STARTED
2026-02-05 00:48:22.209359 | orchestrator | 2026-02-05 00:48:22 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:22.210480 | orchestrator | 2026-02-05 00:48:22 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:22.211700 | orchestrator | 2026-02-05 00:48:22 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:22.213340 | orchestrator | 2026-02-05 00:48:22 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:22.213380 | orchestrator | 2026-02-05 00:48:22 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:25.263850 | orchestrator | 2026-02-05 00:48:25 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:25.265355 | orchestrator | 2026-02-05 00:48:25 | INFO  | Task cfa66c7b-7ce9-4ee2-aeaf-4297eee815db is in state STARTED
2026-02-05 00:48:25.266744 | orchestrator | 2026-02-05 00:48:25 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:25.268450 | orchestrator | 2026-02-05 00:48:25 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:25.270193 | orchestrator | 2026-02-05 00:48:25 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:25.272374 | orchestrator | 2026-02-05 00:48:25 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:25.272456 | orchestrator | 2026-02-05 00:48:25 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:28.372653 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:28.373125 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task cfa66c7b-7ce9-4ee2-aeaf-4297eee815db is in state SUCCESS
2026-02-05 00:48:28.373803 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:28.374521 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:48:28.375322 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:28.379410 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:28.380086 | orchestrator | 2026-02-05 00:48:28 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:28.380112 | orchestrator | 2026-02-05 00:48:28 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:31.444532 | orchestrator | 2026-02-05 00:48:31 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:31.447961 | orchestrator | 2026-02-05 00:48:31 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:31.448905 | orchestrator | 2026-02-05 00:48:31 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:48:31.449716 | orchestrator | 2026-02-05 00:48:31 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:31.450891 | orchestrator | 2026-02-05 00:48:31 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:31.451483 | orchestrator | 2026-02-05 00:48:31 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:31.451499 | orchestrator | 2026-02-05 00:48:31 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:34.488925 | orchestrator | 2026-02-05 00:48:34 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:34.489526 | orchestrator | 2026-02-05 00:48:34 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:34.490468 | orchestrator | 2026-02-05 00:48:34 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:48:34.491315 | orchestrator | 2026-02-05 00:48:34 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:34.492059 | orchestrator | 2026-02-05 00:48:34 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:34.495045 | orchestrator | 2026-02-05 00:48:34 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:34.495080 | orchestrator | 2026-02-05 00:48:34 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:37.546558 | orchestrator | 2026-02-05 00:48:37 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:37.546619 | orchestrator | 2026-02-05 00:48:37 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state STARTED
2026-02-05 00:48:37.546625 | orchestrator | 2026-02-05 00:48:37 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:48:37.546629 | orchestrator | 2026-02-05 00:48:37 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:48:37.546633 | orchestrator | 2026-02-05 00:48:37 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED
2026-02-05 00:48:37.546637 | orchestrator | 2026-02-05 00:48:37 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:48:37.546641 | orchestrator | 2026-02-05 00:48:37 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:48:40.580498 | orchestrator | 2026-02-05 00:48:40 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:48:40.581131 | orchestrator | 2026-02-05 00:48:40 | INFO  | Task 4f8050c8-3a21-4297-9ccd-adecc217d3ea is in state SUCCESS
2026-02-05 00:48:40.582313 | orchestrator |
2026-02-05 00:48:40.582337 | orchestrator |
2026-02-05 00:48:40.582343 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:48:40.582349 | orchestrator |
2026-02-05 00:48:40.582354 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:48:40.582360 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.344) 0:00:00.344 *****
2026-02-05 00:48:40.582365 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:48:40.582371 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:48:40.582376 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:48:40.582399 | orchestrator |
2026-02-05 00:48:40.582405 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:48:40.582411 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.414) 0:00:00.758 *****
2026-02-05 00:48:40.582416 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-05 00:48:40.582422 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-05 00:48:40.582427 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-05 00:48:40.582432 | orchestrator |
2026-02-05 00:48:40.582437 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-05 00:48:40.582443 | orchestrator |
2026-02-05 00:48:40.582448 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-05 00:48:40.582453 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.599) 0:00:01.358 *****
2026-02-05 00:48:40.582458 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:48:40.582464 | orchestrator |
2026-02-05 00:48:40.582469 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-05 00:48:40.582474 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.642) 0:00:02.000 *****
2026-02-05 00:48:40.582514 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-05 00:48:40.582520 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-05 00:48:40.582524 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-05 00:48:40.582529 | orchestrator |
2026-02-05 00:48:40.582534 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-05 00:48:40.582539 | orchestrator | Thursday 05 February 2026 00:48:19 +0000 (0:00:00.689) 0:00:02.691 *****
2026-02-05 00:48:40.582544 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-05 00:48:40.582549 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-05 00:48:40.582554 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-05 00:48:40.582559 | orchestrator |
2026-02-05 00:48:40.582565 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-05 00:48:40.582584 | orchestrator | Thursday 05 February 2026 00:48:20 +0000 (0:00:01.760) 0:00:04.451 *****
2026-02-05 00:48:40.582589 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:40.582594 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:40.582599 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:40.582604 | orchestrator |
2026-02-05 00:48:40.582609 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-05 00:48:40.582615 | orchestrator | Thursday 05 February 2026 00:48:22 +0000 (0:00:01.793) 0:00:06.244 *****
2026-02-05 00:48:40.582620 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:48:40.582625 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:48:40.582630 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:48:40.582635 | orchestrator |
2026-02-05 00:48:40.582640 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:48:40.582646 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:48:40.582656 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:48:40.582661 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:48:40.582666 | orchestrator |
2026-02-05 00:48:40.582671 | orchestrator |
2026-02-05 00:48:40.582676 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:48:40.582682 | orchestrator | Thursday 05 February 2026 00:48:26 +0000 (0:00:03.948) 0:00:10.193 *****
2026-02-05 00:48:40.582687 | orchestrator | ===============================================================================
2026-02-05 00:48:40.582692 | orchestrator | memcached : Restart memcached container --------------------------------- 3.95s
2026-02-05 00:48:40.582697 | orchestrator | memcached : Check memcached container ----------------------------------- 1.79s
2026-02-05 00:48:40.582702 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.76s
2026-02-05 00:48:40.582707 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.69s
2026-02-05 00:48:40.582712 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.64s
2026-02-05 00:48:40.582717 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2026-02-05 00:48:40.582722 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s
2026-02-05 00:48:40.582728 | orchestrator |
2026-02-05 00:48:40.582733 | orchestrator |
2026-02-05 00:48:40.582738 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:48:40.582743 | orchestrator |
2026-02-05 00:48:40.582748 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:48:40.582753 | orchestrator | Thursday 05 February 2026 00:48:15 +0000 (0:00:00.284) 0:00:00.284 *****
2026-02-05 00:48:40.582759 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:48:40.582764 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:48:40.582769 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:48:40.582775 | orchestrator |
2026-02-05 00:48:40.582780 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:48:40.582793 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.437) 0:00:00.722 *****
2026-02-05 00:48:40.582798 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-05 00:48:40.582804 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-05 00:48:40.582809 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-05 00:48:40.582814 | orchestrator |
2026-02-05 00:48:40.582819 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-05 00:48:40.582824 | orchestrator |
2026-02-05 00:48:40.582829 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-05 00:48:40.582834 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.608) 0:00:01.330 *****
2026-02-05 00:48:40.582843 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:48:40.582849 | orchestrator |
2026-02-05 00:48:40.582854 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-05 00:48:40.582859 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.719) 0:00:02.049 *****
2026-02-05 00:48:40.582866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582909 | orchestrator |
2026-02-05 00:48:40.582915 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-05 00:48:40.582920 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:01.284) 0:00:03.334 *****
2026-02-05 00:48:40.582925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.582950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583029 | orchestrator |
2026-02-05 00:48:40.583034 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-05 00:48:40.583040 | orchestrator | Thursday 05 February 2026 00:48:20 +0000 (0:00:02.254) 0:00:05.588 *****
2026-02-05 00:48:40.583045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583085 | orchestrator |
2026-02-05 00:48:40.583094 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-05 00:48:40.583100 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:02.634) 0:00:08.222 *****
2026-02-05 00:48:40.583106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-05 00:48:40.583111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 00:48:40.583117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-05 00:48:40.583123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:48:40.583130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:48:40.583136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-05 00:48:40.583144 | orchestrator | 2026-02-05 00:48:40.583150 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 00:48:40.583156 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:01.958) 0:00:10.181 ***** 2026-02-05 00:48:40.583161 | orchestrator | 2026-02-05 00:48:40.583168 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 00:48:40.583177 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.065) 0:00:10.247 ***** 2026-02-05 00:48:40.583184 | orchestrator | 2026-02-05 00:48:40.583190 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-05 00:48:40.583195 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.067) 0:00:10.315 ***** 2026-02-05 00:48:40.583201 | orchestrator | 2026-02-05 00:48:40.583206 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-05 00:48:40.583212 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.097) 0:00:10.412 ***** 
2026-02-05 00:48:40.583217 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:48:40.583223 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:48:40.583228 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:48:40.583234 | orchestrator | 2026-02-05 00:48:40.583239 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-05 00:48:40.583245 | orchestrator | Thursday 05 February 2026 00:48:34 +0000 (0:00:08.450) 0:00:18.863 ***** 2026-02-05 00:48:40.583250 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:48:40.583256 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:48:40.583261 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:48:40.583267 | orchestrator | 2026-02-05 00:48:40.583272 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:48:40.583278 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:48:40.583284 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:48:40.583290 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 00:48:40.583295 | orchestrator | 2026-02-05 00:48:40.583301 | orchestrator | 2026-02-05 00:48:40.583306 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:48:40.583312 | orchestrator | Thursday 05 February 2026 00:48:38 +0000 (0:00:04.742) 0:00:23.606 ***** 2026-02-05 00:48:40.583317 | orchestrator | =============================================================================== 2026-02-05 00:48:40.583323 | orchestrator | redis : Restart redis container ----------------------------------------- 8.45s 2026-02-05 00:48:40.583328 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.74s 2026-02-05 00:48:40.583333 | 
orchestrator | redis : Copying over redis config files --------------------------------- 2.63s 2026-02-05 00:48:40.583338 | orchestrator | redis : Copying over default config.json files -------------------------- 2.25s 2026-02-05 00:48:40.583343 | orchestrator | redis : Check redis containers ------------------------------------------ 1.96s 2026-02-05 00:48:40.583348 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.28s 2026-02-05 00:48:40.583354 | orchestrator | redis : include_tasks --------------------------------------------------- 0.72s 2026-02-05 00:48:40.583359 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2026-02-05 00:48:40.583364 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2026-02-05 00:48:40.583369 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s 2026-02-05 00:48:40.583378 | orchestrator | 2026-02-05 00:48:40 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED 2026-02-05 00:48:40.583927 | orchestrator | 2026-02-05 00:48:40 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED 2026-02-05 00:48:40.586339 | orchestrator | 2026-02-05 00:48:40 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED 2026-02-05 00:48:40.586360 | orchestrator | 2026-02-05 00:48:40 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:48:40.586364 | orchestrator | 2026-02-05 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:48:43.633515 | orchestrator | 2026-02-05 00:48:43 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:48:43.636376 | orchestrator | 2026-02-05 00:48:43 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED 2026-02-05 00:48:43.636429 | orchestrator | 2026-02-05 00:48:43 | INFO  | Task 
239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED 2026-02-05 00:48:43.636437 | orchestrator | 2026-02-05 00:48:43 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state STARTED 2026-02-05 00:48:43.636767 | orchestrator | 2026-02-05 00:48:43 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:48:43.636782 | orchestrator | 2026-02-05 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:49:20.323285 | orchestrator | 2026-02-05 00:49:20 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 00:49:23.363773 | orchestrator | 2026-02-05 00:49:23 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:49:23.364212 | orchestrator | 2026-02-05 00:49:23 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:49:23.364720 | orchestrator | 2026-02-05 00:49:23 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED 2026-02-05 00:49:23.365302 | orchestrator | 2026-02-05 00:49:23 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED 2026-02-05 00:49:23.366188 | orchestrator | 2026-02-05 00:49:23 | INFO  | Task 22f7062e-49a4-4b2a-9091-c886255c3f53 is in state SUCCESS 2026-02-05 00:49:23.369346 | orchestrator | 2026-02-05 00:49:23.369376 | orchestrator | 2026-02-05 00:49:23.369384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:49:23.369392 | orchestrator | 2026-02-05 00:49:23.369397 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:49:23.369402 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.290) 0:00:00.290 ***** 2026-02-05 00:49:23.369406 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:23.369412 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:49:23.369416 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:23.369421 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:49:23.369426 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:49:23.369430 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:49:23.369435 | orchestrator | 2026-02-05 00:49:23.369439 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:49:23.369445 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.836) 0:00:01.127 ***** 2026-02-05 00:49:23.369450 | orchestrator | ok: [testbed-node-0] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:49:23.369454 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:49:23.369459 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:49:23.369464 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:49:23.369468 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:49:23.369473 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-05 00:49:23.369477 | orchestrator | 2026-02-05 00:49:23.369482 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-05 00:49:23.369487 | orchestrator | 2026-02-05 00:49:23.369491 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-05 00:49:23.369496 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.729) 0:00:01.857 ***** 2026-02-05 00:49:23.369501 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:49:23.369507 | orchestrator | 2026-02-05 00:49:23.369511 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 00:49:23.369516 | orchestrator | Thursday 05 February 2026 00:48:20 +0000 (0:00:01.410) 0:00:03.267 ***** 2026-02-05 00:49:23.369521 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-05 00:49:23.369525 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-05 00:49:23.369530 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-05 00:49:23.369534 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-05 00:49:23.369539 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-05 00:49:23.369544 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-05 00:49:23.369548 | orchestrator | 2026-02-05 00:49:23.369553 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-05 00:49:23.369568 | orchestrator | Thursday 05 February 2026 00:48:21 +0000 (0:00:01.432) 0:00:04.700 ***** 2026-02-05 00:49:23.369573 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-05 00:49:23.369577 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-05 00:49:23.369584 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-05 00:49:23.369599 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-05 00:49:23.369604 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-05 00:49:23.369608 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-05 00:49:23.369613 | orchestrator | 2026-02-05 00:49:23.369617 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 00:49:23.369622 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:01.617) 0:00:06.317 ***** 2026-02-05 00:49:23.369626 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-05 00:49:23.369631 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:23.369636 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-05 00:49:23.369641 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:23.369645 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-05 00:49:23.369650 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:23.369654 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-05 00:49:23.369659 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:23.369663 | orchestrator | skipping: 
[testbed-node-4] => (item=openvswitch)  2026-02-05 00:49:23.369668 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:23.369672 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-05 00:49:23.369677 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:23.369681 | orchestrator | 2026-02-05 00:49:23.369686 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-05 00:49:23.369690 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:01.690) 0:00:08.008 ***** 2026-02-05 00:49:23.369695 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:23.369699 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:23.369704 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:23.369709 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:23.369713 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:23.369718 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:23.369722 | orchestrator | 2026-02-05 00:49:23.369727 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-05 00:49:23.369732 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.724) 0:00:08.732 ***** 2026-02-05 00:49:23.369745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-02-05 00:49:23.369752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369805 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369810 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369823 | orchestrator | 2026-02-05 00:49:23.369828 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-05 00:49:23.369832 | orchestrator | Thursday 05 February 2026 00:48:27 +0000 (0:00:01.930) 0:00:10.663 ***** 2026-02-05 00:49:23.369837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369850 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369918 | orchestrator | 2026-02-05 00:49:23.369923 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-05 00:49:23.369928 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:03.229) 0:00:13.892 ***** 2026-02-05 00:49:23.369935 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:49:23.369942 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:49:23.369947 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:49:23.369952 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:23.369957 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:23.369963 | orchestrator | 
skipping: [testbed-node-5] 2026-02-05 00:49:23.369968 | orchestrator | 2026-02-05 00:49:23.369973 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-05 00:49:23.369978 | orchestrator | Thursday 05 February 2026 00:48:33 +0000 (0:00:02.549) 0:00:16.441 ***** 2026-02-05 00:49:23.369984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.369995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370073 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-05 00:49:23.370102 | orchestrator | 2026-02-05 00:49:23.370107 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:49:23.370112 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:03.535) 0:00:19.977 ***** 2026-02-05 00:49:23.370118 | orchestrator | 2026-02-05 00:49:23.370123 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:49:23.370128 | orchestrator | Thursday 05 February 2026 00:48:37 +0000 (0:00:00.284) 0:00:20.262 ***** 2026-02-05 00:49:23.370133 | orchestrator | 2026-02-05 00:49:23.370138 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:49:23.370144 | orchestrator | Thursday 05 
February 2026 00:48:37 +0000 (0:00:00.208) 0:00:20.471 ***** 2026-02-05 00:49:23.370149 | orchestrator | 2026-02-05 00:49:23.370154 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:49:23.370160 | orchestrator | Thursday 05 February 2026 00:48:37 +0000 (0:00:00.280) 0:00:20.751 ***** 2026-02-05 00:49:23.370165 | orchestrator | 2026-02-05 00:49:23.370170 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:49:23.370175 | orchestrator | Thursday 05 February 2026 00:48:38 +0000 (0:00:00.286) 0:00:21.037 ***** 2026-02-05 00:49:23.370180 | orchestrator | 2026-02-05 00:49:23.370185 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-05 00:49:23.370190 | orchestrator | Thursday 05 February 2026 00:48:38 +0000 (0:00:00.276) 0:00:21.313 ***** 2026-02-05 00:49:23.370195 | orchestrator | 2026-02-05 00:49:23.370200 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-05 00:49:23.370206 | orchestrator | Thursday 05 February 2026 00:48:38 +0000 (0:00:00.256) 0:00:21.570 ***** 2026-02-05 00:49:23.370214 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:23.370220 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:49:23.370225 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:49:23.370231 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:23.370236 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:49:23.370241 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:23.370246 | orchestrator | 2026-02-05 00:49:23.370251 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-05 00:49:23.370255 | orchestrator | Thursday 05 February 2026 00:48:47 +0000 (0:00:09.123) 0:00:30.693 ***** 2026-02-05 00:49:23.370260 | orchestrator | ok: [testbed-node-1] 2026-02-05 
00:49:23.370265 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:49:23.370269 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:49:23.370274 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:49:23.370278 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:49:23.370283 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:49:23.370287 | orchestrator | 2026-02-05 00:49:23.370292 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-05 00:49:23.370300 | orchestrator | Thursday 05 February 2026 00:48:49 +0000 (0:00:01.752) 0:00:32.446 ***** 2026-02-05 00:49:23.370304 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:23.370309 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:23.370313 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:49:23.370318 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:49:23.370323 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:49:23.370336 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:23.370341 | orchestrator | 2026-02-05 00:49:23.370345 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-05 00:49:23.370350 | orchestrator | Thursday 05 February 2026 00:48:59 +0000 (0:00:09.613) 0:00:42.060 ***** 2026-02-05 00:49:23.370355 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-05 00:49:23.370359 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-05 00:49:23.370364 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-05 00:49:23.370369 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-05 00:49:23.370373 | orchestrator | changed: [testbed-node-3] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-05 00:49:23.370380 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-05 00:49:23.370385 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-05 00:49:23.370390 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-05 00:49:23.370394 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-05 00:49:23.370399 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-05 00:49:23.370404 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-05 00:49:23.370408 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-05 00:49:23.370413 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 00:49:23.370417 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 00:49:23.370422 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 00:49:23.370427 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 00:49:23.370431 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 00:49:23.370436 | orchestrator | ok: [testbed-node-5] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-05 00:49:23.370441 | orchestrator | 2026-02-05 00:49:23.370445 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-05 00:49:23.370450 | orchestrator | Thursday 05 February 2026 00:49:06 +0000 (0:00:07.934) 0:00:49.995 ***** 2026-02-05 00:49:23.370455 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-05 00:49:23.370459 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:23.370464 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-05 00:49:23.370468 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:23.370473 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-05 00:49:23.370481 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:23.370486 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-05 00:49:23.370491 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-05 00:49:23.370495 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-05 00:49:23.370500 | orchestrator | 2026-02-05 00:49:23.370504 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-05 00:49:23.370509 | orchestrator | Thursday 05 February 2026 00:49:09 +0000 (0:00:02.802) 0:00:52.797 ***** 2026-02-05 00:49:23.370514 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-05 00:49:23.370520 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:49:23.370525 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-05 00:49:23.370530 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:49:23.370534 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-05 00:49:23.370539 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:49:23.370544 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 
2026-02-05 00:49:23.370548 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-05 00:49:23.370553 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-05 00:49:23.370557 | orchestrator | 2026-02-05 00:49:23.370562 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-05 00:49:23.370566 | orchestrator | Thursday 05 February 2026 00:49:13 +0000 (0:00:03.745) 0:00:56.542 ***** 2026-02-05 00:49:23.370571 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:49:23.370576 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:49:23.370580 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:49:23.370585 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:49:23.370589 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:49:23.370594 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:49:23.370598 | orchestrator | 2026-02-05 00:49:23.370603 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:49:23.370608 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 00:49:23.370613 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 00:49:23.370617 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-05 00:49:23.370622 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 00:49:23.370627 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 00:49:23.370634 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 00:49:23.370638 | orchestrator | 2026-02-05 00:49:23.370643 | orchestrator | 2026-02-05 00:49:23.370648 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:49:23.370653 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:08.261) 0:01:04.804 ***** 2026-02-05 00:49:23.370660 | orchestrator | =============================================================================== 2026-02-05 00:49:23.370668 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.88s 2026-02-05 00:49:23.370676 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.12s 2026-02-05 00:49:23.370683 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.93s 2026-02-05 00:49:23.370690 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.75s 2026-02-05 00:49:23.370701 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.54s 2026-02-05 00:49:23.370708 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.23s 2026-02-05 00:49:23.370716 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.80s 2026-02-05 00:49:23.370724 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.55s 2026-02-05 00:49:23.370731 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.93s 2026-02-05 00:49:23.370736 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.75s 2026-02-05 00:49:23.370741 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.69s 2026-02-05 00:49:23.370745 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.62s 2026-02-05 00:49:23.370750 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.59s 2026-02-05 00:49:23.370754 | orchestrator | 
module-load : Load modules ---------------------------------------------- 1.43s
2026-02-05 00:49:23.370759 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.41s
2026-02-05 00:49:23.370763 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s
2026-02-05 00:49:23.370768 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2026-02-05 00:49:23.370772 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.72s
2026-02-05 00:49:23.370777 | orchestrator | 2026-02-05 00:49:23 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:49:23.370781 | orchestrator | 2026-02-05 00:49:23 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:49:26.427792 | orchestrator | 2026-02-05 00:49:26 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:49:26.427882 | orchestrator | 2026-02-05 00:49:26 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:49:26.430008 | orchestrator | 2026-02-05 00:49:26 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:49:26.430761 | orchestrator | 2026-02-05 00:49:26 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state STARTED
2026-02-05 00:49:26.431572 | orchestrator | 2026-02-05 00:49:26 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:49:26.431673 | orchestrator | 2026-02-05 00:49:26 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:18.124548 | orchestrator | 2026-02-05 00:50:18 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:18.125281 | orchestrator | 2026-02-05 00:50:18 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:18.125967 | orchestrator | 2026-02-05 00:50:18 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:18.127628 | orchestrator |
2026-02-05 00:50:18.127658 | orchestrator | 2026-02-05 00:50:18 | INFO  | Task 239a4d73-fcf4-4af5-80ed-6bbec79e7988 is in state SUCCESS
2026-02-05 00:50:18.129244 | orchestrator |
2026-02-05 00:50:18.129268 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-05 00:50:18.129273 | orchestrator |
2026-02-05 00:50:18.129277 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-05 00:50:18.129282 | orchestrator | Thursday 05 February 2026 00:45:57 +0000 (0:00:00.116) 0:00:00.116 *****
2026-02-05 00:50:18.129285 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:50:18.129293 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:50:18.129297 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:50:18.129301 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.129304 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.129308 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.129312 | orchestrator |
2026-02-05 00:50:18.129316 | orchestrator | TASK [k3s_prereq : Set same
timezone on every Server] **************************
2026-02-05 00:50:18.129319 | orchestrator | Thursday 05 February 2026 00:45:57 +0000 (0:00:00.559) 0:00:00.675 *****
2026-02-05 00:50:18.129323 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129328 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129331 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129335 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129339 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129343 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129346 | orchestrator |
2026-02-05 00:50:18.129350 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-05 00:50:18.129354 | orchestrator | Thursday 05 February 2026 00:45:58 +0000 (0:00:00.563) 0:00:01.239 *****
2026-02-05 00:50:18.129358 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129361 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129365 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129369 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129373 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129376 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129380 | orchestrator |
2026-02-05 00:50:18.129384 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-05 00:50:18.129388 | orchestrator | Thursday 05 February 2026 00:45:59 +0000 (0:00:00.534) 0:00:01.773 *****
2026-02-05 00:50:18.129391 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:50:18.129395 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:50:18.129399 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:50:18.129403 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.129407 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:18.129410 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:18.129414 | orchestrator |
2026-02-05 00:50:18.129418 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-05 00:50:18.129422 | orchestrator | Thursday 05 February 2026 00:46:01 +0000 (0:00:02.554) 0:00:04.328 *****
2026-02-05 00:50:18.129425 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:50:18.129429 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:50:18.129433 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:50:18.129437 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.129440 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:18.129453 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:18.129457 | orchestrator |
2026-02-05 00:50:18.129461 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-05 00:50:18.129465 | orchestrator | Thursday 05 February 2026 00:46:02 +0000 (0:00:01.318) 0:00:05.647 *****
2026-02-05 00:50:18.129478 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:50:18.129482 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:50:18.129486 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:50:18.129489 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.129493 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:18.129497 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:18.129501 | orchestrator |
2026-02-05 00:50:18.129504 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-05 00:50:18.129508 | orchestrator | Thursday 05 February 2026 00:46:03 +0000 (0:00:00.959) 0:00:06.607 *****
2026-02-05 00:50:18.129512 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129516 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129520 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129523 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129527 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129531 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129535 | orchestrator |
2026-02-05 00:50:18.129538 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-05 00:50:18.129542 | orchestrator | Thursday 05 February 2026 00:46:04 +0000 (0:00:00.860) 0:00:07.173 *****
2026-02-05 00:50:18.129546 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129550 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129553 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129557 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129561 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129565 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129568 | orchestrator |
2026-02-05 00:50:18.129572 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-05 00:50:18.129576 | orchestrator | Thursday 05 February 2026 00:46:05 +0000 (0:00:00.565) 0:00:08.034 *****
2026-02-05 00:50:18.129580 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 00:50:18.129584 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 00:50:18.129587 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129591 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 00:50:18.129595 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 00:50:18.129599 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129603 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 00:50:18.129606 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 00:50:18.129610 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129614 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 00:50:18.129623 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 00:50:18.129636 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129640 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 00:50:18.129644 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 00:50:18.129648 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129653 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-05 00:50:18.129657 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-05 00:50:18.129661 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129665 | orchestrator |
2026-02-05 00:50:18.129671 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-05 00:50:18.129675 | orchestrator | Thursday 05 February 2026 00:46:05 +0000 (0:00:00.589) 0:00:08.623 *****
2026-02-05 00:50:18.129679 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129683 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129686 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129690 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129694 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129698 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129701 | orchestrator |
2026-02-05 00:50:18.129705 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-05 00:50:18.129709 | orchestrator | Thursday 05 February 2026 00:46:07 +0000 (0:00:01.337) 0:00:09.961 *****
2026-02-05 00:50:18.129713 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:50:18.129717 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:50:18.129720 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:50:18.129724 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.129728 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.129732 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.129735 | orchestrator |
2026-02-05 00:50:18.129739 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-05 00:50:18.129743 | orchestrator | Thursday 05 February 2026 00:46:08 +0000 (0:00:00.841) 0:00:10.803 *****
2026-02-05 00:50:18.129747 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:18.129750 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:50:18.129754 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:50:18.129758 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:18.129762 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.129765 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:50:18.129769 | orchestrator |
2026-02-05 00:50:18.129773 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-05 00:50:18.129777 | orchestrator | Thursday 05 February 2026 00:46:13 +0000 (0:00:05.300) 0:00:16.103 *****
2026-02-05 00:50:18.129780 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129784 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129788 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129792 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129795 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129799 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129803 | orchestrator |
2026-02-05 00:50:18.129807 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-05 00:50:18.129810 | orchestrator | Thursday 05 February 2026 00:46:14 +0000 (0:00:01.221) 0:00:17.324 *****
2026-02-05 00:50:18.129814 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129818 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129821 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129825 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129829 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129832 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129836 | orchestrator |
2026-02-05 00:50:18.129840 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-05 00:50:18.129845 | orchestrator | Thursday 05 February 2026 00:46:16 +0000 (0:00:01.882) 0:00:19.207 *****
2026-02-05 00:50:18.129850 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129854 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129858 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129863 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129867 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129871 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129876 | orchestrator |
2026-02-05 00:50:18.129880 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-05 00:50:18.129885 | orchestrator | Thursday 05 February 2026 00:46:18 +0000 (0:00:01.760) 0:00:20.967 *****
2026-02-05 00:50:18.129891 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-05 00:50:18.129896 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-05 00:50:18.129900 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129905 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-05 00:50:18.129909 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-05 00:50:18.129914 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129918 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-05 00:50:18.129923 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-05 00:50:18.129927 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.129932 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-05 00:50:18.129936 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-05 00:50:18.129940 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.129945 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-05 00:50:18.129949 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-05 00:50:18.129954 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.129958 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-05 00:50:18.129963 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-05 00:50:18.129967 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.129971 | orchestrator |
2026-02-05 00:50:18.129976 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-05 00:50:18.129982 | orchestrator | Thursday 05 February 2026 00:46:19 +0000 (0:00:01.247) 0:00:22.215 *****
2026-02-05 00:50:18.129987 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.129991 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.129996 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.130000 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.130004 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130009 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130110 | orchestrator |
2026-02-05 00:50:18.130124 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-05 00:50:18.130133 | orchestrator | Thursday 05 February 2026 00:46:20 +0000 (0:00:00.671) 0:00:22.887 *****
2026-02-05 00:50:18.130139 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.130145 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.130151 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.130157 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.130164 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130170 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130177 | orchestrator |
2026-02-05 00:50:18.130184 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-05 00:50:18.130190 | orchestrator |
2026-02-05 00:50:18.130207 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-05 00:50:18.130215 | orchestrator | Thursday 05 February 2026 00:46:21 +0000 (0:00:01.453) 0:00:24.341 *****
2026-02-05 00:50:18.130221 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.130227 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.130233 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.130238 | orchestrator |
2026-02-05 00:50:18.130242 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-05 00:50:18.130246 | orchestrator | Thursday 05 February 2026 00:46:24 +0000 (0:00:03.274) 0:00:27.615 *****
2026-02-05 00:50:18.130250 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.130254 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.130258 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.130261 | orchestrator |
2026-02-05 00:50:18.130265 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-05 00:50:18.130269 | orchestrator | Thursday 05 February 2026 00:46:27 +0000 (0:00:02.626) 0:00:30.241 *****
2026-02-05 00:50:18.130277 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.130280 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.130284 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.130288 | orchestrator |
2026-02-05 00:50:18.130292 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-05 00:50:18.130296 | orchestrator | Thursday 05 February 2026 00:46:28 +0000 (0:00:00.886) 0:00:31.128 *****
2026-02-05 00:50:18.130299 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.130303 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.130307 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.130311 | orchestrator |
2026-02-05 00:50:18.130315 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-05 00:50:18.130318 | orchestrator | Thursday 05 February 2026 00:46:29 +0000 (0:00:00.882) 0:00:32.010 *****
2026-02-05 00:50:18.130322 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.130326 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130330 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130334 | orchestrator |
2026-02-05 00:50:18.130337 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-05 00:50:18.130341 | orchestrator | Thursday 05 February 2026 00:46:30 +0000 (0:00:00.918) 0:00:32.929 *****
2026-02-05 00:50:18.130345 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:18.130349 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.130352 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:18.130356 | orchestrator |
2026-02-05 00:50:18.130360 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-05 00:50:18.130364 | orchestrator | Thursday 05 February 2026 00:46:31 +0000 (0:00:01.349) 0:00:34.278 *****
2026-02-05 00:50:18.130368 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.130371 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:50:18.130375 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:50:18.130379 | orchestrator |
2026-02-05 00:50:18.130383 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-05 00:50:18.130387 | orchestrator | Thursday 05 February 2026 00:46:33 +0000 (0:00:01.900) 0:00:36.180 *****
2026-02-05 00:50:18.130390 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:50:18.130394 | orchestrator |
2026-02-05 00:50:18.130398 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-05 00:50:18.130402 | orchestrator | Thursday 05 February 2026 00:46:34 +0000 (0:00:00.605) 0:00:36.786 *****
2026-02-05 00:50:18.130406 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.130409 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.130414 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.130420 | orchestrator |
2026-02-05 00:50:18.130426 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-05 00:50:18.130433 | orchestrator | Thursday 05 February 2026 00:46:36 +0000 (0:00:02.091) 0:00:38.877 *****
2026-02-05 00:50:18.130439 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130445 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130450 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.130456 | orchestrator |
2026-02-05 00:50:18.130462 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-05 00:50:18.130468 | orchestrator | Thursday 05 February 2026 00:46:37 +0000 (0:00:00.961) 0:00:39.839 *****
2026-02-05 00:50:18.130475 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130481 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130484 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.130488 | orchestrator |
2026-02-05 00:50:18.130492 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-05 00:50:18.130496 | orchestrator | Thursday 05 February 2026 00:46:38 +0000 (0:00:01.114) 0:00:40.953 *****
2026-02-05 00:50:18.130499 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130503 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130512 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:50:18.130515 | orchestrator |
2026-02-05 00:50:18.130519 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-05 00:50:18.130528 | orchestrator | Thursday 05 February 2026 00:46:39 +0000 (0:00:01.635) 0:00:42.589 *****
2026-02-05 00:50:18.130533 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.130536 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130540 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130544 | orchestrator |
2026-02-05 00:50:18.130548 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-05 00:50:18.130554 | orchestrator | Thursday 05 February 2026 00:46:40 +0000 (0:00:00.622) 0:00:43.211 *****
2026-02-05 00:50:18.130558 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.130562 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.130566 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.130570 | orchestrator |
2026-02-05 00:50:18.130573 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-05 00:50:18.130577 | orchestrator | Thursday 05 February 2026 00:46:40 +0000 (0:00:00.283) 0:00:43.494 *****
2026-02-05 00:50:18.130581 | orchestrator | changed: [testbed-node-0]
2026-02-05
00:50:18.130585 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130588 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130592 | orchestrator | 2026-02-05 00:50:18.130596 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-05 00:50:18.130600 | orchestrator | Thursday 05 February 2026 00:46:42 +0000 (0:00:01.541) 0:00:45.036 ***** 2026-02-05 00:50:18.130603 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.130607 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130611 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130615 | orchestrator | 2026-02-05 00:50:18.130618 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-05 00:50:18.130622 | orchestrator | Thursday 05 February 2026 00:46:45 +0000 (0:00:02.756) 0:00:47.793 ***** 2026-02-05 00:50:18.130626 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.130630 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130633 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130637 | orchestrator | 2026-02-05 00:50:18.130641 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-05 00:50:18.130645 | orchestrator | Thursday 05 February 2026 00:46:45 +0000 (0:00:00.391) 0:00:48.184 ***** 2026-02-05 00:50:18.130649 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 00:50:18.130654 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-05 00:50:18.130658 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
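The "Detect Kubernetes version" and "Set node role label selector" tasks above pick a control-plane label matching the running release: since Kubernetes 1.24 the `node-role.kubernetes.io/master` label was removed in favour of `node-role.kubernetes.io/control-plane`. A sketch of such version-based selection; `role_selector` is a hypothetical helper, not the role's actual implementation:

```shell
#!/bin/sh
# Sketch: choose a node-role label selector from the server version string.
# role_selector() is a hypothetical helper for illustration only.
role_selector() {
    # $1 is a version string such as "v1.24.3+k3s1"
    minor=$(printf '%s' "$1" | sed -E 's/^v?[0-9]+\.([0-9]+).*/\1/')
    if [ "$minor" -ge 24 ]; then
        echo "node-role.kubernetes.io/control-plane"
    else
        echo "node-role.kubernetes.io/master"
    fi
}
role_selector "v1.24.3+k3s1"   # modern control-plane label
role_selector "v1.20.0+k3s1"   # legacy master label
```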
2026-02-05 00:50:18.130662 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 00:50:18.130665 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 00:50:18.130669 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-05 00:50:18.130673 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 00:50:18.130677 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 00:50:18.130681 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-05 00:50:18.130688 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 00:50:18.130692 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-05 00:50:18.130696 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
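The retries above come from a task that polls the cluster until every master shows up in `kubectl get nodes`; each "FAILED - RETRYING" line is one poll that found fewer nodes than expected. A minimal sketch of such a readiness check, assuming a hypothetical `count_ready` helper that parses `kubectl get nodes --no-headers` output:

```shell
#!/bin/sh
# Sketch: count nodes whose STATUS column reads "Ready" in
# `kubectl get nodes --no-headers` output. count_ready is hypothetical.
count_ready() {
    awk '$2 == "Ready" { n++ } END { print n+0 }'
}
sample="testbed-node-0   Ready    control-plane   1m   v1.24.3+k3s1
testbed-node-1   Ready    control-plane   1m   v1.24.3+k3s1
testbed-node-2   NotReady control-plane   1m   v1.24.3+k3s1"
printf '%s\n' "$sample" | count_ready    # prints 2
# A real task would retry until this equals the expected number of masters.
```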
2026-02-05 00:50:18.130699 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.130703 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130707 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130711 | orchestrator | 2026-02-05 00:50:18.130715 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-05 00:50:18.130719 | orchestrator | Thursday 05 February 2026 00:47:29 +0000 (0:00:43.658) 0:01:31.842 ***** 2026-02-05 00:50:18.130722 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:18.130726 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:18.130730 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:18.130734 | orchestrator | 2026-02-05 00:50:18.130737 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-05 00:50:18.130741 | orchestrator | Thursday 05 February 2026 00:47:29 +0000 (0:00:00.475) 0:01:32.318 ***** 2026-02-05 00:50:18.130745 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130749 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130752 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130756 | orchestrator | 2026-02-05 00:50:18.130760 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-05 00:50:18.130764 | orchestrator | Thursday 05 February 2026 00:47:30 +0000 (0:00:00.985) 0:01:33.304 ***** 2026-02-05 00:50:18.130767 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130771 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130775 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130779 | orchestrator | 2026-02-05 00:50:18.130785 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-05 00:50:18.130789 | orchestrator | Thursday 05 February 2026 00:47:31 +0000 (0:00:01.453) 0:01:34.757 ***** 2026-02-05 00:50:18.130793 
| orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130796 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130800 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130804 | orchestrator | 2026-02-05 00:50:18.130810 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-05 00:50:18.130814 | orchestrator | Thursday 05 February 2026 00:47:57 +0000 (0:00:25.464) 0:02:00.222 ***** 2026-02-05 00:50:18.130818 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130822 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.130825 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130829 | orchestrator | 2026-02-05 00:50:18.130833 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-05 00:50:18.130837 | orchestrator | Thursday 05 February 2026 00:47:58 +0000 (0:00:00.565) 0:02:00.788 ***** 2026-02-05 00:50:18.130840 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.130844 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130848 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130852 | orchestrator | 2026-02-05 00:50:18.130855 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-05 00:50:18.130859 | orchestrator | Thursday 05 February 2026 00:47:58 +0000 (0:00:00.558) 0:02:01.346 ***** 2026-02-05 00:50:18.130863 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130867 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130871 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130874 | orchestrator | 2026-02-05 00:50:18.130878 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-05 00:50:18.130882 | orchestrator | Thursday 05 February 2026 00:47:59 +0000 (0:00:00.635) 0:02:01.981 ***** 2026-02-05 00:50:18.130889 | orchestrator | ok: [testbed-node-0] 
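The node-token tasks above (register the file's access mode, change it, read the token, then restore the mode) amount to temporarily relaxing permissions on the root-owned k3s server token so the play can slurp it. A sketch of that dance under assumed paths, using a temp file in place of /var/lib/rancher/k3s/server/node-token:

```shell
#!/bin/sh
# Sketch: read a root-protected token by widening and then restoring
# its mode. TOKEN_FILE stands in for the real node-token path.
TOKEN_FILE=$(mktemp)
printf 'K10abc::server:secret\n' > "$TOKEN_FILE"
chmod 600 "$TOKEN_FILE"
orig_mode=$(stat -c '%a' "$TOKEN_FILE")   # register current access mode
chmod 644 "$TOKEN_FILE"                   # widen so the token can be read
token=$(cat "$TOKEN_FILE")
chmod "$orig_mode" "$TOKEN_FILE"          # restore the original mode
echo "$token"
rm -f "$TOKEN_FILE"
```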
2026-02-05 00:50:18.130893 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130897 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130900 | orchestrator | 2026-02-05 00:50:18.130904 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-05 00:50:18.130908 | orchestrator | Thursday 05 February 2026 00:48:00 +0000 (0:00:00.785) 0:02:02.767 ***** 2026-02-05 00:50:18.130912 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.130916 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.130919 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.130923 | orchestrator | 2026-02-05 00:50:18.130927 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-05 00:50:18.130931 | orchestrator | Thursday 05 February 2026 00:48:00 +0000 (0:00:00.256) 0:02:03.023 ***** 2026-02-05 00:50:18.130935 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130938 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130942 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130946 | orchestrator | 2026-02-05 00:50:18.130950 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-05 00:50:18.130954 | orchestrator | Thursday 05 February 2026 00:48:00 +0000 (0:00:00.662) 0:02:03.685 ***** 2026-02-05 00:50:18.130957 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130961 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130965 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130969 | orchestrator | 2026-02-05 00:50:18.130973 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-05 00:50:18.130976 | orchestrator | Thursday 05 February 2026 00:48:01 +0000 (0:00:00.686) 0:02:04.372 ***** 2026-02-05 00:50:18.130980 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.130984 | 
orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.130988 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.130991 | orchestrator | 2026-02-05 00:50:18.130995 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-05 00:50:18.130999 | orchestrator | Thursday 05 February 2026 00:48:02 +0000 (0:00:01.083) 0:02:05.455 ***** 2026-02-05 00:50:18.131003 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:18.131007 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:18.131010 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:18.131014 | orchestrator | 2026-02-05 00:50:18.131018 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-05 00:50:18.131022 | orchestrator | Thursday 05 February 2026 00:48:03 +0000 (0:00:00.971) 0:02:06.426 ***** 2026-02-05 00:50:18.131025 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:18.131029 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:18.131033 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:18.131036 | orchestrator | 2026-02-05 00:50:18.131040 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-05 00:50:18.131044 | orchestrator | Thursday 05 February 2026 00:48:03 +0000 (0:00:00.274) 0:02:06.702 ***** 2026-02-05 00:50:18.131048 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:18.131052 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:18.131055 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:18.131059 | orchestrator | 2026-02-05 00:50:18.131063 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-05 00:50:18.131067 | orchestrator | Thursday 05 February 2026 00:48:04 +0000 (0:00:00.244) 0:02:06.946 ***** 2026-02-05 00:50:18.131070 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.131074 | orchestrator | 
ok: [testbed-node-2] 2026-02-05 00:50:18.131078 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.131082 | orchestrator | 2026-02-05 00:50:18.131085 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-05 00:50:18.131089 | orchestrator | Thursday 05 February 2026 00:48:04 +0000 (0:00:00.591) 0:02:07.537 ***** 2026-02-05 00:50:18.131093 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.131097 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.131106 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.131110 | orchestrator | 2026-02-05 00:50:18.131114 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-05 00:50:18.131117 | orchestrator | Thursday 05 February 2026 00:48:05 +0000 (0:00:00.768) 0:02:08.305 ***** 2026-02-05 00:50:18.131121 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-05 00:50:18.131128 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-05 00:50:18.131132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-05 00:50:18.131136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-05 00:50:18.131142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-05 00:50:18.131146 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-05 00:50:18.131149 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-05 00:50:18.131153 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-05 
00:50:18.131157 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-05 00:50:18.131161 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-05 00:50:18.131165 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-05 00:50:18.131169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-05 00:50:18.131173 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-05 00:50:18.131176 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-05 00:50:18.131180 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-05 00:50:18.131184 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-05 00:50:18.131188 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-05 00:50:18.131192 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-05 00:50:18.131195 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-05 00:50:18.131211 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-05 00:50:18.131218 | orchestrator | 2026-02-05 00:50:18.131224 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-05 00:50:18.131230 | orchestrator | 2026-02-05 00:50:18.131237 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-05 00:50:18.131244 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:02.768) 
0:02:11.074 ***** 2026-02-05 00:50:18.131250 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:50:18.131255 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:50:18.131259 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:50:18.131263 | orchestrator | 2026-02-05 00:50:18.131267 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-05 00:50:18.131270 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:00.310) 0:02:11.384 ***** 2026-02-05 00:50:18.131274 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:50:18.131278 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:50:18.131282 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:50:18.131285 | orchestrator | 2026-02-05 00:50:18.131289 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-05 00:50:18.131296 | orchestrator | Thursday 05 February 2026 00:48:09 +0000 (0:00:00.795) 0:02:12.179 ***** 2026-02-05 00:50:18.131300 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:50:18.131304 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:50:18.131308 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:50:18.131311 | orchestrator | 2026-02-05 00:50:18.131315 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-05 00:50:18.131319 | orchestrator | Thursday 05 February 2026 00:48:09 +0000 (0:00:00.331) 0:02:12.511 ***** 2026-02-05 00:50:18.131322 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:50:18.131326 | orchestrator | 2026-02-05 00:50:18.131330 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-05 00:50:18.131334 | orchestrator | Thursday 05 February 2026 00:48:10 +0000 (0:00:00.503) 0:02:13.015 ***** 2026-02-05 00:50:18.131338 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:50:18.131342 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 00:50:18.131345 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:50:18.131349 | orchestrator | 2026-02-05 00:50:18.131353 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-05 00:50:18.131356 | orchestrator | Thursday 05 February 2026 00:48:10 +0000 (0:00:00.579) 0:02:13.594 ***** 2026-02-05 00:50:18.131360 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:50:18.131364 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:50:18.131368 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:50:18.131371 | orchestrator | 2026-02-05 00:50:18.131375 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-05 00:50:18.131379 | orchestrator | Thursday 05 February 2026 00:48:11 +0000 (0:00:00.289) 0:02:13.884 ***** 2026-02-05 00:50:18.131383 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:50:18.131386 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:50:18.131390 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:50:18.131394 | orchestrator | 2026-02-05 00:50:18.131398 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-05 00:50:18.131401 | orchestrator | Thursday 05 February 2026 00:48:11 +0000 (0:00:00.293) 0:02:14.177 ***** 2026-02-05 00:50:18.131405 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:50:18.131409 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:50:18.131412 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:50:18.131416 | orchestrator | 2026-02-05 00:50:18.131422 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-05 00:50:18.131426 | orchestrator | Thursday 05 February 2026 00:48:12 +0000 (0:00:00.650) 0:02:14.827 ***** 2026-02-05 00:50:18.131430 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:50:18.131434 | 
orchestrator | changed: [testbed-node-4] 2026-02-05 00:50:18.131437 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:50:18.131441 | orchestrator | 2026-02-05 00:50:18.131447 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-05 00:50:18.131451 | orchestrator | Thursday 05 February 2026 00:48:13 +0000 (0:00:01.419) 0:02:16.247 ***** 2026-02-05 00:50:18.131455 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:50:18.131459 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:50:18.131462 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:50:18.131466 | orchestrator | 2026-02-05 00:50:18.131470 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-05 00:50:18.131473 | orchestrator | Thursday 05 February 2026 00:48:14 +0000 (0:00:01.141) 0:02:17.388 ***** 2026-02-05 00:50:18.131477 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:50:18.131481 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:50:18.131485 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:50:18.131488 | orchestrator | 2026-02-05 00:50:18.131492 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-05 00:50:18.131496 | orchestrator | 2026-02-05 00:50:18.131500 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-05 00:50:18.131506 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:09.183) 0:02:26.572 ***** 2026-02-05 00:50:18.131510 | orchestrator | ok: [testbed-manager] 2026-02-05 00:50:18.131514 | orchestrator | 2026-02-05 00:50:18.131518 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-05 00:50:18.131521 | orchestrator | Thursday 05 February 2026 00:48:24 +0000 (0:00:01.082) 0:02:27.654 ***** 2026-02-05 00:50:18.131525 | orchestrator | changed: [testbed-manager] 2026-02-05 
00:50:18.131529 | orchestrator | 2026-02-05 00:50:18.131533 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-05 00:50:18.131536 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.430) 0:02:28.085 ***** 2026-02-05 00:50:18.131540 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-05 00:50:18.131544 | orchestrator | 2026-02-05 00:50:18.131547 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-05 00:50:18.131551 | orchestrator | Thursday 05 February 2026 00:48:25 +0000 (0:00:00.593) 0:02:28.678 ***** 2026-02-05 00:50:18.131555 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131559 | orchestrator | 2026-02-05 00:50:18.131562 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-05 00:50:18.131566 | orchestrator | Thursday 05 February 2026 00:48:26 +0000 (0:00:01.013) 0:02:29.691 ***** 2026-02-05 00:50:18.131570 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131574 | orchestrator | 2026-02-05 00:50:18.131577 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-05 00:50:18.131581 | orchestrator | Thursday 05 February 2026 00:48:27 +0000 (0:00:00.596) 0:02:30.288 ***** 2026-02-05 00:50:18.131585 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 00:50:18.131589 | orchestrator | 2026-02-05 00:50:18.131592 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-05 00:50:18.131596 | orchestrator | Thursday 05 February 2026 00:48:29 +0000 (0:00:01.673) 0:02:31.961 ***** 2026-02-05 00:50:18.131600 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-05 00:50:18.131604 | orchestrator | 2026-02-05 00:50:18.131607 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
2026-02-05 00:50:18.131611 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:00.895) 0:02:32.857 ***** 2026-02-05 00:50:18.131615 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131618 | orchestrator | 2026-02-05 00:50:18.131622 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-05 00:50:18.131626 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:00.482) 0:02:33.339 ***** 2026-02-05 00:50:18.131630 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131634 | orchestrator | 2026-02-05 00:50:18.131637 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-05 00:50:18.131641 | orchestrator | 2026-02-05 00:50:18.131645 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-05 00:50:18.131649 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:00.565) 0:02:33.905 ***** 2026-02-05 00:50:18.131652 | orchestrator | ok: [testbed-manager] 2026-02-05 00:50:18.131656 | orchestrator | 2026-02-05 00:50:18.131660 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-05 00:50:18.131664 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:00.446) 0:02:34.351 ***** 2026-02-05 00:50:18.131667 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-05 00:50:18.131671 | orchestrator | 2026-02-05 00:50:18.131675 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-05 00:50:18.131679 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:00.256) 0:02:34.608 ***** 2026-02-05 00:50:18.131682 | orchestrator | ok: [testbed-manager] 2026-02-05 00:50:18.131686 | orchestrator | 2026-02-05 00:50:18.131690 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-05 00:50:18.131693 | orchestrator | Thursday 05 February 2026 00:48:33 +0000 (0:00:01.266) 0:02:35.874 ***** 2026-02-05 00:50:18.131700 | orchestrator | ok: [testbed-manager] 2026-02-05 00:50:18.131704 | orchestrator | 2026-02-05 00:50:18.131708 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-05 00:50:18.131712 | orchestrator | Thursday 05 February 2026 00:48:34 +0000 (0:00:01.742) 0:02:37.616 ***** 2026-02-05 00:50:18.131715 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131719 | orchestrator | 2026-02-05 00:50:18.131723 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-05 00:50:18.131726 | orchestrator | Thursday 05 February 2026 00:48:35 +0000 (0:00:00.749) 0:02:38.366 ***** 2026-02-05 00:50:18.131730 | orchestrator | ok: [testbed-manager] 2026-02-05 00:50:18.131734 | orchestrator | 2026-02-05 00:50:18.131740 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-05 00:50:18.131744 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:00.861) 0:02:39.227 ***** 2026-02-05 00:50:18.131748 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131752 | orchestrator | 2026-02-05 00:50:18.131755 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-05 00:50:18.131761 | orchestrator | Thursday 05 February 2026 00:48:43 +0000 (0:00:07.409) 0:02:46.637 ***** 2026-02-05 00:50:18.131765 | orchestrator | changed: [testbed-manager] 2026-02-05 00:50:18.131769 | orchestrator | 2026-02-05 00:50:18.131772 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-05 00:50:18.131776 | orchestrator | Thursday 05 February 2026 00:49:00 +0000 (0:00:16.775) 0:03:03.412 ***** 2026-02-05 00:50:18.131780 | orchestrator | ok: [testbed-manager] 2026-02-05 
00:50:18.131784 | orchestrator | 2026-02-05 00:50:18.131787 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-05 00:50:18.131791 | orchestrator | 2026-02-05 00:50:18.131795 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-05 00:50:18.131799 | orchestrator | Thursday 05 February 2026 00:49:01 +0000 (0:00:00.731) 0:03:04.144 ***** 2026-02-05 00:50:18.131802 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:18.131806 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:18.131810 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:18.131814 | orchestrator | 2026-02-05 00:50:18.131817 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-05 00:50:18.131821 | orchestrator | Thursday 05 February 2026 00:49:01 +0000 (0:00:00.343) 0:03:04.487 ***** 2026-02-05 00:50:18.131825 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:18.131828 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:18.131832 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:18.131836 | orchestrator | 2026-02-05 00:50:18.131840 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-05 00:50:18.131843 | orchestrator | Thursday 05 February 2026 00:49:02 +0000 (0:00:00.292) 0:03:04.780 ***** 2026-02-05 00:50:18.131847 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-1, testbed-node-0, testbed-node-2 2026-02-05 00:50:18.131851 | orchestrator | 2026-02-05 00:50:18.131855 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-05 00:50:18.131858 | orchestrator | Thursday 05 February 2026 00:49:02 +0000 (0:00:00.779) 0:03:05.559 ***** 2026-02-05 00:50:18.131862 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-05 00:50:18.131866 | 
orchestrator |
2026-02-05 00:50:18.131870 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-05 00:50:18.131873 | orchestrator | Thursday 05 February 2026 00:49:03 +0000 (0:00:00.893) 0:03:06.453 *****
2026-02-05 00:50:18.131877 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:50:18.131881 | orchestrator |
2026-02-05 00:50:18.131885 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-05 00:50:18.131888 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.837) 0:03:07.290 *****
2026-02-05 00:50:18.131892 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.131898 | orchestrator |
2026-02-05 00:50:18.131902 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-05 00:50:18.131906 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.118) 0:03:07.409 *****
2026-02-05 00:50:18.131910 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:50:18.131913 | orchestrator |
2026-02-05 00:50:18.131917 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-05 00:50:18.131933 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.963) 0:03:08.373 *****
2026-02-05 00:50:18.131952 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.131965 | orchestrator |
2026-02-05 00:50:18.131972 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-05 00:50:18.131977 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.123) 0:03:08.496 *****
2026-02-05 00:50:18.131983 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.131989 | orchestrator |
2026-02-05 00:50:18.131995 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-05 00:50:18.132002 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.110) 0:03:08.606 *****
2026-02-05 00:50:18.132008 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.132014 | orchestrator |
2026-02-05 00:50:18.132020 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-05 00:50:18.132026 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.105) 0:03:08.712 *****
2026-02-05 00:50:18.132032 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.132038 | orchestrator |
2026-02-05 00:50:18.132044 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-05 00:50:18.132051 | orchestrator | Thursday 05 February 2026 00:49:06 +0000 (0:00:00.123) 0:03:08.836 *****
2026-02-05 00:50:18.132057 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 00:50:18.132063 | orchestrator |
2026-02-05 00:50:18.132070 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-05 00:50:18.132077 | orchestrator | Thursday 05 February 2026 00:49:11 +0000 (0:00:05.317) 0:03:14.154 *****
2026-02-05 00:50:18.132084 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-05 00:50:18.132090 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
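The "Wait for Cilium resources" task above retries up to 30 times (the `FAILED - RETRYING ... (30 retries left)` line) before reporting `ok` for each item. This is Ansible's `until`/`retries`/`delay` pattern; the role's actual check (presumably a `kubectl rollout status`/`kubectl wait` call per deployment and daemonset) is not shown in the log, so the probe below is a hedged stand-in. A minimal sketch of the retry loop, with a stub condition in place of the real kubectl call:

```python
import time

def wait_until(check, retries=30, delay=2.0):
    """Re-run `check` until it returns truthy or retries are exhausted,
    mirroring Ansible's until/retries/delay loop used by the
    'Wait for Cilium resources' task. Returns the attempt number
    on success, raises TimeoutError otherwise."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            time.sleep(delay)
    raise TimeoutError(f"condition not met after {retries} retries")

# Stub standing in for the real readiness probe (e.g. checking
# daemonset/cilium): fails twice, then succeeds, like the
# FAILED - RETRYING lines above.
attempts = iter([False, False, True])
print(wait_until(lambda: next(attempts), retries=30, delay=0))  # prints 3
```

The 42.04s total for this task in the TASKS RECAP below is consistent with a handful of such retry rounds while the Cilium pods start.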
2026-02-05 00:50:18.132097 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-05 00:50:18.132101 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-05 00:50:18.132104 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-05 00:50:18.132108 | orchestrator |
2026-02-05 00:50:18.132112 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-05 00:50:18.132116 | orchestrator | Thursday 05 February 2026 00:49:53 +0000 (0:00:42.040) 0:03:56.194 *****
2026-02-05 00:50:18.132123 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 00:50:18.132127 | orchestrator |
2026-02-05 00:50:18.132131 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-05 00:50:18.132135 | orchestrator | Thursday 05 February 2026 00:49:54 +0000 (0:00:01.392) 0:03:57.587 *****
2026-02-05 00:50:18.132138 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 00:50:18.132142 | orchestrator |
2026-02-05 00:50:18.132149 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-05 00:50:18.132153 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:01.511) 0:03:59.099 *****
2026-02-05 00:50:18.132157 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 00:50:18.132161 | orchestrator |
2026-02-05 00:50:18.132165 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-05 00:50:18.132168 | orchestrator | Thursday 05 February 2026 00:49:57 +0000 (0:00:01.010) 0:04:00.109 *****
2026-02-05 00:50:18.132172 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.132176 | orchestrator |
2026-02-05 00:50:18.132183 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-05 00:50:18.132187 | orchestrator | Thursday 05 February 2026 00:49:57 +0000 (0:00:00.130) 0:04:00.240 *****
2026-02-05 00:50:18.132190 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-05 00:50:18.132194 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-05 00:50:18.132255 | orchestrator |
2026-02-05 00:50:18.132262 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-05 00:50:18.132267 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:01.726) 0:04:01.966 *****
2026-02-05 00:50:18.132273 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.132278 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.132284 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.132290 | orchestrator |
2026-02-05 00:50:18.132296 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-05 00:50:18.132302 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.294) 0:04:02.261 *****
2026-02-05 00:50:18.132309 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.132315 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.132321 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.132325 | orchestrator |
2026-02-05 00:50:18.132328 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-05 00:50:18.132332 | orchestrator |
2026-02-05 00:50:18.132336 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-05 00:50:18.132340 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:01.077) 0:04:03.338 *****
2026-02-05 00:50:18.132343 | orchestrator | ok: [testbed-manager]
2026-02-05 00:50:18.132347 | orchestrator |
2026-02-05 00:50:18.132351 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-05 00:50:18.132354 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:00.208) 0:04:03.465 *****
2026-02-05 00:50:18.132358 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-05 00:50:18.132362 | orchestrator |
2026-02-05 00:50:18.132365 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-05 00:50:18.132369 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:00.208) 0:04:03.673 *****
2026-02-05 00:50:18.132373 | orchestrator | changed: [testbed-manager]
2026-02-05 00:50:18.132376 | orchestrator |
2026-02-05 00:50:18.132380 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-05 00:50:18.132384 | orchestrator |
2026-02-05 00:50:18.132388 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-05 00:50:18.132391 | orchestrator | Thursday 05 February 2026 00:50:05 +0000 (0:00:04.787) 0:04:08.460 *****
2026-02-05 00:50:18.132395 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:50:18.132399 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:50:18.132402 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:50:18.132406 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:18.132410 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:18.132413 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:18.132417 | orchestrator |
2026-02-05 00:50:18.132421 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-05 00:50:18.132424 | orchestrator | Thursday 05 February 2026 00:50:06 +0000 (0:00:00.722) 0:04:09.182 *****
2026-02-05 00:50:18.132428 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-05 00:50:18.132432 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-05 00:50:18.132435 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-05 00:50:18.132439 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-05 00:50:18.132443 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-05 00:50:18.132452 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-05 00:50:18.132455 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-05 00:50:18.132459 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-05 00:50:18.132463 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-05 00:50:18.132467 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-05 00:50:18.132470 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-05 00:50:18.132474 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-05 00:50:18.132481 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-05 00:50:18.132485 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-05 00:50:18.132489 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-05 00:50:18.132493 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-05 00:50:18.132499 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-05 00:50:18.132503 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-05 00:50:18.132507 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-05 00:50:18.132511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-05 00:50:18.132514 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-05 00:50:18.132518 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-05 00:50:18.132522 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-05 00:50:18.132526 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-05 00:50:18.132529 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-05 00:50:18.132533 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-05 00:50:18.132537 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-05 00:50:18.132540 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-05 00:50:18.132544 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-05 00:50:18.132548 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-05 00:50:18.132552 | orchestrator |
2026-02-05 00:50:18.132555 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-05 00:50:18.132559 | orchestrator | Thursday 05 February 2026 00:50:16 +0000 (0:00:10.028) 0:04:19.211 *****
2026-02-05 00:50:18.132563 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.132567 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.132570 | orchestrator |
skipping: [testbed-node-5]
2026-02-05 00:50:18.132574 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.132578 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.132582 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.132585 | orchestrator |
2026-02-05 00:50:18.132589 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-05 00:50:18.132593 | orchestrator | Thursday 05 February 2026 00:50:16 +0000 (0:00:00.493) 0:04:19.704 *****
2026-02-05 00:50:18.132597 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:50:18.132600 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:50:18.132604 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:50:18.132611 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:50:18.132614 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:50:18.132618 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:50:18.132622 | orchestrator |
2026-02-05 00:50:18.132626 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:50:18.132630 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:50:18.132634 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-05 00:50:18.132639 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-05 00:50:18.132642 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-05 00:50:18.132646 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-05 00:50:18.132650 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-05 00:50:18.132654 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-05 00:50:18.132657 | orchestrator |
2026-02-05 00:50:18.132661 | orchestrator |
2026-02-05 00:50:18.132665 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:50:18.132669 | orchestrator | Thursday 05 February 2026 00:50:17 +0000 (0:00:00.380) 0:04:20.085 *****
2026-02-05 00:50:18.132672 | orchestrator | ===============================================================================
2026-02-05 00:50:18.132676 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.66s
2026-02-05 00:50:18.132680 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.04s
2026-02-05 00:50:18.132691 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.46s
2026-02-05 00:50:18.132698 | orchestrator | kubectl : Install required packages ------------------------------------ 16.77s
2026-02-05 00:50:18.132702 | orchestrator | Manage labels ---------------------------------------------------------- 10.03s
2026-02-05 00:50:18.132705 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.18s
2026-02-05 00:50:18.132709 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.41s
2026-02-05 00:50:18.132713 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.32s
2026-02-05 00:50:18.132717 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.30s
2026-02-05 00:50:18.132721 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.79s
2026-02-05 00:50:18.132725 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 3.27s
2026-02-05 00:50:18.132729 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.77s
2026-02-05 00:50:18.132732 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.76s
2026-02-05 00:50:18.132736 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.63s
2026-02-05 00:50:18.132740 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.55s
2026-02-05 00:50:18.132744 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.09s
2026-02-05 00:50:18.132748 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.90s
2026-02-05 00:50:18.132751 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.88s
2026-02-05 00:50:18.132758 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.76s
2026-02-05 00:50:18.132762 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.74s
2026-02-05 00:50:18.132766 | orchestrator | 2026-02-05 00:50:18 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:18.132770 | orchestrator | 2026-02-05 00:50:18 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:21.162292 | orchestrator | 2026-02-05 00:50:21 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:21.163094 | orchestrator | 2026-02-05 00:50:21 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:21.163945 | orchestrator | 2026-02-05 00:50:21 | INFO  | Task 6142f028-12a9-47d9-acd5-934c565ab2d9 is in state STARTED
2026-02-05 00:50:21.164883 | orchestrator | 2026-02-05 00:50:21 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:21.167978 | orchestrator | 2026-02-05 00:50:21 | INFO  |
Task 0b736413-4b51-4391-bb2d-b1ea3884acb3 is in state STARTED
2026-02-05 00:50:21.171662 | orchestrator | 2026-02-05 00:50:21 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:21.171699 | orchestrator | 2026-02-05 00:50:21 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:24.211614 | orchestrator | 2026-02-05 00:50:24 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:24.211694 | orchestrator | 2026-02-05 00:50:24 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:24.211710 | orchestrator | 2026-02-05 00:50:24 | INFO  | Task 6142f028-12a9-47d9-acd5-934c565ab2d9 is in state STARTED
2026-02-05 00:50:24.211723 | orchestrator | 2026-02-05 00:50:24 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:24.211735 | orchestrator | 2026-02-05 00:50:24 | INFO  | Task 0b736413-4b51-4391-bb2d-b1ea3884acb3 is in state STARTED
2026-02-05 00:50:24.211747 | orchestrator | 2026-02-05 00:50:24 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:24.211759 | orchestrator | 2026-02-05 00:50:24 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:27.241320 | orchestrator | 2026-02-05 00:50:27 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:27.242493 | orchestrator | 2026-02-05 00:50:27 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:27.242662 | orchestrator | 2026-02-05 00:50:27 | INFO  | Task 6142f028-12a9-47d9-acd5-934c565ab2d9 is in state SUCCESS
2026-02-05 00:50:27.244004 | orchestrator | 2026-02-05 00:50:27 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:27.246200 | orchestrator | 2026-02-05 00:50:27 | INFO  | Task 0b736413-4b51-4391-bb2d-b1ea3884acb3 is in state STARTED
2026-02-05 00:50:27.247391 | orchestrator | 2026-02-05 00:50:27 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:27.247424 | orchestrator | 2026-02-05 00:50:27 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:30.278588 | orchestrator | 2026-02-05 00:50:30 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:30.279839 | orchestrator | 2026-02-05 00:50:30 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:30.279957 | orchestrator | 2026-02-05 00:50:30 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:30.280092 | orchestrator | 2026-02-05 00:50:30 | INFO  | Task 0b736413-4b51-4391-bb2d-b1ea3884acb3 is in state SUCCESS
2026-02-05 00:50:30.280885 | orchestrator | 2026-02-05 00:50:30 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:30.280911 | orchestrator | 2026-02-05 00:50:30 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:33.322653 | orchestrator | 2026-02-05 00:50:33 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:33.323061 | orchestrator | 2026-02-05 00:50:33 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:33.324894 | orchestrator | 2026-02-05 00:50:33 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:33.327224 | orchestrator | 2026-02-05 00:50:33 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:33.327403 | orchestrator | 2026-02-05 00:50:33 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:36.354137 | orchestrator | 2026-02-05 00:50:36 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:36.355074 | orchestrator | 2026-02-05 00:50:36 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:36.356306 | orchestrator | 2026-02-05 00:50:36 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:36.358560 | orchestrator | 2026-02-05 00:50:36 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:36.358694 | orchestrator | 2026-02-05 00:50:36 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:39.387492 | orchestrator | 2026-02-05 00:50:39 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:39.387813 | orchestrator | 2026-02-05 00:50:39 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:39.391369 | orchestrator | 2026-02-05 00:50:39 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:39.393532 | orchestrator | 2026-02-05 00:50:39 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:39.394174 | orchestrator | 2026-02-05 00:50:39 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:42.424702 | orchestrator | 2026-02-05 00:50:42 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:42.425698 | orchestrator | 2026-02-05 00:50:42 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:42.426401 | orchestrator | 2026-02-05 00:50:42 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:42.427431 | orchestrator | 2026-02-05 00:50:42 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:42.427461 | orchestrator | 2026-02-05 00:50:42 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:45.515002 | orchestrator | 2026-02-05 00:50:45 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:45.516849 | orchestrator | 2026-02-05 00:50:45 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:45.518805 | orchestrator | 2026-02-05 00:50:45 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:45.519732 | orchestrator | 2026-02-05 00:50:45 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:45.519761 | orchestrator | 2026-02-05 00:50:45 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:48.568888 | orchestrator | 2026-02-05 00:50:48 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:48.569567 | orchestrator | 2026-02-05 00:50:48 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:48.570665 | orchestrator | 2026-02-05 00:50:48 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state STARTED
2026-02-05 00:50:48.571613 | orchestrator | 2026-02-05 00:50:48 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:50:48.571670 | orchestrator | 2026-02-05 00:50:48 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:50:51.608950 | orchestrator | 2026-02-05 00:50:51 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:50:51.610684 | orchestrator | 2026-02-05 00:50:51 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED
2026-02-05 00:50:51.612138 | orchestrator | 2026-02-05 00:50:51 | INFO  | Task 4f7504d0-2faa-47be-b3aa-f3f1428d7497 is in state SUCCESS
2026-02-05 00:50:51.613685 | orchestrator |
2026-02-05 00:50:51.613737 | orchestrator |
2026-02-05 00:50:51.613847 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-05 00:50:51.613871 | orchestrator |
2026-02-05 00:50:51.613885 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-05 00:50:51.613898 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:00.144) 0:00:00.144 *****
2026-02-05 00:50:51.613912 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-05 00:50:51.613927 |
orchestrator |
2026-02-05 00:50:51.613940 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-05 00:50:51.613954 | orchestrator | Thursday 05 February 2026 00:50:22 +0000 (0:00:00.679) 0:00:00.824 *****
2026-02-05 00:50:51.613968 | orchestrator | changed: [testbed-manager]
2026-02-05 00:50:51.613982 | orchestrator |
2026-02-05 00:50:51.613995 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-05 00:50:51.614009 | orchestrator | Thursday 05 February 2026 00:50:23 +0000 (0:00:01.169) 0:00:01.993 *****
2026-02-05 00:50:51.614216 | orchestrator | changed: [testbed-manager]
2026-02-05 00:50:51.614232 | orchestrator |
2026-02-05 00:50:51.614246 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:50:51.614261 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:50:51.614276 | orchestrator |
2026-02-05 00:50:51.614289 | orchestrator |
2026-02-05 00:50:51.614302 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:50:51.614316 | orchestrator | Thursday 05 February 2026 00:50:24 +0000 (0:00:00.526) 0:00:02.519 *****
2026-02-05 00:50:51.614330 | orchestrator | ===============================================================================
2026-02-05 00:50:51.614344 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s
2026-02-05 00:50:51.614376 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.68s
2026-02-05 00:50:51.614390 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s
2026-02-05 00:50:51.614402 | orchestrator |
2026-02-05 00:50:51.614416 | orchestrator |
2026-02-05 00:50:51.614429 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-05 00:50:51.614443 | orchestrator |
2026-02-05 00:50:51.614457 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-05 00:50:51.614470 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:00.135) 0:00:00.135 *****
2026-02-05 00:50:51.614483 | orchestrator | ok: [testbed-manager]
2026-02-05 00:50:51.614497 | orchestrator |
2026-02-05 00:50:51.614510 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-05 00:50:51.614524 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:00.507) 0:00:00.643 *****
2026-02-05 00:50:51.614538 | orchestrator | ok: [testbed-manager]
2026-02-05 00:50:51.614551 | orchestrator |
2026-02-05 00:50:51.614588 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-05 00:50:51.614603 | orchestrator | Thursday 05 February 2026 00:50:22 +0000 (0:00:00.710) 0:00:01.354 *****
2026-02-05 00:50:51.614616 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-05 00:50:51.614630 | orchestrator |
2026-02-05 00:50:51.614643 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-05 00:50:51.614656 | orchestrator | Thursday 05 February 2026 00:50:22 +0000 (0:00:00.596) 0:00:01.951 *****
2026-02-05 00:50:51.614669 | orchestrator | changed: [testbed-manager]
2026-02-05 00:50:51.614682 | orchestrator |
2026-02-05 00:50:51.614695 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-05 00:50:51.614708 | orchestrator | Thursday 05 February 2026 00:50:24 +0000 (0:00:01.656) 0:00:03.607 *****
2026-02-05 00:50:51.614722 | orchestrator | changed: [testbed-manager]
2026-02-05 00:50:51.614735 | orchestrator |
2026-02-05 00:50:51.614748 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-05 00:50:51.614761 | orchestrator | Thursday 05 February 2026 00:50:24 +0000 (0:00:00.516) 0:00:04.124 *****
2026-02-05 00:50:51.614775 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 00:50:51.614789 | orchestrator |
2026-02-05 00:50:51.614803 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-05 00:50:51.614816 | orchestrator | Thursday 05 February 2026 00:50:26 +0000 (0:00:01.490) 0:00:05.615 *****
2026-02-05 00:50:51.614830 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 00:50:51.614843 | orchestrator |
2026-02-05 00:50:51.614857 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-05 00:50:51.614870 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:00.774) 0:00:06.389 *****
2026-02-05 00:50:51.614883 | orchestrator | ok: [testbed-manager]
2026-02-05 00:50:51.614896 | orchestrator |
2026-02-05 00:50:51.614909 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-05 00:50:51.614923 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:00.440) 0:00:06.830 *****
2026-02-05 00:50:51.614936 | orchestrator | ok: [testbed-manager]
2026-02-05 00:50:51.614950 | orchestrator |
2026-02-05 00:50:51.614963 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:50:51.614977 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:50:51.614990 | orchestrator |
2026-02-05 00:50:51.615004 | orchestrator |
2026-02-05 00:50:51.615017 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:50:51.615032 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:00.274) 0:00:07.104 *****
2026-02-05 00:50:51.615045 | orchestrator | ===============================================================================
2026-02-05 00:50:51.615059 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.66s
2026-02-05 00:50:51.615072 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.49s
2026-02-05 00:50:51.615086 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s
2026-02-05 00:50:51.615143 | orchestrator | Create .kube directory -------------------------------------------------- 0.71s
2026-02-05 00:50:51.615159 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.60s
2026-02-05 00:50:51.615172 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.52s
2026-02-05 00:50:51.615186 | orchestrator | Get home directory of operator user ------------------------------------- 0.51s
2026-02-05 00:50:51.615199 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s
2026-02-05 00:50:51.615212 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.27s
2026-02-05 00:50:51.615225 | orchestrator |
2026-02-05 00:50:51.615239 | orchestrator |
2026-02-05 00:50:51.615252 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2026-02-05 00:50:51.615276 | orchestrator |
2026-02-05 00:50:51.615290 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-05 00:50:51.615303 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:00.320) 0:00:00.320 *****
2026-02-05 00:50:51.615316 | orchestrator | ok: [localhost] => {
2026-02-05 00:50:51.615330 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-02-05 00:50:51.615343 | orchestrator | }
2026-02-05 00:50:51.615357 | orchestrator |
2026-02-05 00:50:51.615371 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-02-05 00:50:51.615384 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:00.085) 0:00:00.406 *****
2026-02-05 00:50:51.615398 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-02-05 00:50:51.615413 | orchestrator | ...ignoring
2026-02-05 00:50:51.615428 | orchestrator |
2026-02-05 00:50:51.615442 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-02-05 00:50:51.615462 | orchestrator | Thursday 05 February 2026 00:48:39 +0000 (0:00:03.641) 0:00:04.048 *****
2026-02-05 00:50:51.615476 | orchestrator | skipping: [localhost]
2026-02-05 00:50:51.615488 | orchestrator |
2026-02-05 00:50:51.615502 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-02-05 00:50:51.615515 | orchestrator | Thursday 05 February 2026 00:48:39 +0000 (0:00:00.211) 0:00:04.259 *****
2026-02-05 00:50:51.615529 | orchestrator | ok: [localhost]
2026-02-05 00:50:51.615542 | orchestrator |
2026-02-05 00:50:51.615555 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:50:51.615568 | orchestrator |
2026-02-05 00:50:51.615581 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:50:51.615594 | orchestrator | Thursday 05 February 2026 00:48:40 +0000 (0:00:00.243) 0:00:04.503 *****
2026-02-05 00:50:51.615608 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:50:51.615622 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:50:51.615635 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:50:51.615648 | orchestrator | 2026-02-05
00:50:51.615662 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:50:51.615675 | orchestrator | Thursday 05 February 2026 00:48:40 +0000 (0:00:00.556) 0:00:05.060 ***** 2026-02-05 00:50:51.615688 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-05 00:50:51.615701 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-05 00:50:51.615715 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-05 00:50:51.615729 | orchestrator | 2026-02-05 00:50:51.615742 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-05 00:50:51.615755 | orchestrator | 2026-02-05 00:50:51.615768 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 00:50:51.615781 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:00.625) 0:00:05.685 ***** 2026-02-05 00:50:51.615794 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:50:51.615808 | orchestrator | 2026-02-05 00:50:51.615820 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 00:50:51.615833 | orchestrator | Thursday 05 February 2026 00:48:42 +0000 (0:00:00.744) 0:00:06.430 ***** 2026-02-05 00:50:51.615846 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:51.615859 | orchestrator | 2026-02-05 00:50:51.615872 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-05 00:50:51.615886 | orchestrator | Thursday 05 February 2026 00:48:43 +0000 (0:00:01.108) 0:00:07.538 ***** 2026-02-05 00:50:51.615899 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.615912 | orchestrator | 2026-02-05 00:50:51.615925 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 
2026-02-05 00:50:51.615938 | orchestrator | Thursday 05 February 2026 00:48:43 +0000 (0:00:00.419) 0:00:07.957 ***** 2026-02-05 00:50:51.615959 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.615972 | orchestrator | 2026-02-05 00:50:51.615985 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-05 00:50:51.615998 | orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:00.613) 0:00:08.570 ***** 2026-02-05 00:50:51.616011 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.616024 | orchestrator | 2026-02-05 00:50:51.616037 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-05 00:50:51.616050 | orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:00.332) 0:00:08.903 ***** 2026-02-05 00:50:51.616062 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.616075 | orchestrator | 2026-02-05 00:50:51.616089 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 00:50:51.616102 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.520) 0:00:09.423 ***** 2026-02-05 00:50:51.616135 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:50:51.616151 | orchestrator | 2026-02-05 00:50:51.616164 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-05 00:50:51.616189 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.852) 0:00:10.276 ***** 2026-02-05 00:50:51.616202 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:51.616216 | orchestrator | 2026-02-05 00:50:51.616227 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-05 00:50:51.616236 | orchestrator | Thursday 05 February 2026 00:48:47 +0000 (0:00:01.153) 0:00:11.429 ***** 2026-02-05 
00:50:51.616244 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.616252 | orchestrator | 2026-02-05 00:50:51.616260 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-05 00:50:51.616268 | orchestrator | Thursday 05 February 2026 00:48:48 +0000 (0:00:01.029) 0:00:12.459 ***** 2026-02-05 00:50:51.616275 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.616283 | orchestrator | 2026-02-05 00:50:51.616291 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-05 00:50:51.616299 | orchestrator | Thursday 05 February 2026 00:48:48 +0000 (0:00:00.737) 0:00:13.197 ***** 2026-02-05 00:50:51.616318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 
00:50:51.616358 | orchestrator | 2026-02-05 00:50:51.616366 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-05 00:50:51.616374 | orchestrator | Thursday 05 February 2026 00:48:49 +0000 (0:00:01.021) 0:00:14.218 ***** 2026-02-05 00:50:51.616389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616428 | orchestrator | 2026-02-05 00:50:51.616436 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-05 00:50:51.616444 | orchestrator | Thursday 05 February 2026 00:48:53 +0000 (0:00:03.167) 0:00:17.386 ***** 2026-02-05 00:50:51.616452 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 00:50:51.616460 | 
orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 00:50:51.616468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-05 00:50:51.616476 | orchestrator | 2026-02-05 00:50:51.616484 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-05 00:50:51.616492 | orchestrator | Thursday 05 February 2026 00:48:55 +0000 (0:00:01.929) 0:00:19.315 ***** 2026-02-05 00:50:51.616500 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 00:50:51.616508 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 00:50:51.616516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-05 00:50:51.616525 | orchestrator | 2026-02-05 00:50:51.616533 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-05 00:50:51.616545 | orchestrator | Thursday 05 February 2026 00:48:56 +0000 (0:00:01.791) 0:00:21.107 ***** 2026-02-05 00:50:51.616554 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 00:50:51.616562 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 00:50:51.616570 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-05 00:50:51.616578 | orchestrator | 2026-02-05 00:50:51.616586 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-05 00:50:51.616594 | orchestrator | Thursday 05 February 2026 00:48:58 +0000 (0:00:01.396) 0:00:22.503 ***** 2026-02-05 00:50:51.616602 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 00:50:51.616610 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 00:50:51.616618 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-05 00:50:51.616626 | orchestrator | 2026-02-05 00:50:51.616634 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-05 00:50:51.616642 | orchestrator | Thursday 05 February 2026 00:49:00 +0000 (0:00:02.607) 0:00:25.111 ***** 2026-02-05 00:50:51.616650 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 00:50:51.616658 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 00:50:51.616666 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-05 00:50:51.616674 | orchestrator | 2026-02-05 00:50:51.616687 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-05 00:50:51.616700 | orchestrator | Thursday 05 February 2026 00:49:02 +0000 (0:00:01.702) 0:00:26.813 ***** 2026-02-05 00:50:51.616708 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 00:50:51.616716 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 00:50:51.616724 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-05 00:50:51.616732 | orchestrator | 2026-02-05 00:50:51.616740 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-05 00:50:51.616748 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:01.962) 0:00:28.776 ***** 2026-02-05 
00:50:51.616756 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.616764 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:51.616772 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:51.616780 | orchestrator | 2026-02-05 00:50:51.616788 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-05 00:50:51.616796 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.749) 0:00:29.525 ***** 2026-02-05 00:50:51.616805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:50:51.616843 | orchestrator | 2026-02-05 00:50:51.616850 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-05 00:50:51.616862 | orchestrator | Thursday 05 February 2026 
00:49:07 +0000 (0:00:01.801) 0:00:31.327 ***** 2026-02-05 00:50:51.616870 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:51.616879 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:51.616887 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:51.616895 | orchestrator | 2026-02-05 00:50:51.616902 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-05 00:50:51.616910 | orchestrator | Thursday 05 February 2026 00:49:08 +0000 (0:00:01.016) 0:00:32.344 ***** 2026-02-05 00:50:51.616918 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:51.616926 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:51.616934 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:51.616942 | orchestrator | 2026-02-05 00:50:51.616950 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-05 00:50:51.616958 | orchestrator | Thursday 05 February 2026 00:49:15 +0000 (0:00:07.129) 0:00:39.473 ***** 2026-02-05 00:50:51.616966 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:51.616975 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:51.616983 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:51.616991 | orchestrator | 2026-02-05 00:50:51.616999 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 00:50:51.617007 | orchestrator | 2026-02-05 00:50:51.617015 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 00:50:51.617023 | orchestrator | Thursday 05 February 2026 00:49:15 +0000 (0:00:00.782) 0:00:40.256 ***** 2026-02-05 00:50:51.617031 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:51.617039 | orchestrator | 2026-02-05 00:50:51.617047 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 00:50:51.617055 | orchestrator | Thursday 05 
February 2026 00:49:16 +0000 (0:00:00.798) 0:00:41.054 ***** 2026-02-05 00:50:51.617063 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:50:51.617071 | orchestrator | 2026-02-05 00:50:51.617079 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 00:50:51.617087 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:00.223) 0:00:41.278 ***** 2026-02-05 00:50:51.617095 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:51.617103 | orchestrator | 2026-02-05 00:50:51.617111 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 00:50:51.617167 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:01.896) 0:00:43.175 ***** 2026-02-05 00:50:51.617175 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:50:51.617184 | orchestrator | 2026-02-05 00:50:51.617191 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 00:50:51.617199 | orchestrator | 2026-02-05 00:50:51.617207 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 00:50:51.617215 | orchestrator | Thursday 05 February 2026 00:50:13 +0000 (0:00:54.591) 0:01:37.766 ***** 2026-02-05 00:50:51.617223 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:51.617230 | orchestrator | 2026-02-05 00:50:51.617238 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 00:50:51.617246 | orchestrator | Thursday 05 February 2026 00:50:13 +0000 (0:00:00.462) 0:01:38.229 ***** 2026-02-05 00:50:51.617254 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:50:51.617262 | orchestrator | 2026-02-05 00:50:51.617270 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 00:50:51.617286 | orchestrator | Thursday 05 February 2026 00:50:14 +0000 (0:00:00.164) 
0:01:38.393 ***** 2026-02-05 00:50:51.617293 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:51.617302 | orchestrator | 2026-02-05 00:50:51.617309 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 00:50:51.617317 | orchestrator | Thursday 05 February 2026 00:50:21 +0000 (0:00:07.042) 0:01:45.435 ***** 2026-02-05 00:50:51.617325 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:50:51.617333 | orchestrator | 2026-02-05 00:50:51.617341 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-05 00:50:51.617349 | orchestrator | 2026-02-05 00:50:51.617356 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-05 00:50:51.617364 | orchestrator | Thursday 05 February 2026 00:50:30 +0000 (0:00:08.937) 0:01:54.373 ***** 2026-02-05 00:50:51.617372 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:51.617380 | orchestrator | 2026-02-05 00:50:51.617393 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-05 00:50:51.617402 | orchestrator | Thursday 05 February 2026 00:50:30 +0000 (0:00:00.748) 0:01:55.121 ***** 2026-02-05 00:50:51.617410 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:50:51.617418 | orchestrator | 2026-02-05 00:50:51.617425 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-05 00:50:51.617433 | orchestrator | Thursday 05 February 2026 00:50:31 +0000 (0:00:00.384) 0:01:55.506 ***** 2026-02-05 00:50:51.617441 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:51.617449 | orchestrator | 2026-02-05 00:50:51.617457 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-05 00:50:51.617465 | orchestrator | Thursday 05 February 2026 00:50:37 +0000 (0:00:06.442) 0:02:01.949 ***** 2026-02-05 00:50:51.617473 | 
orchestrator | changed: [testbed-node-2] 2026-02-05 00:50:51.617481 | orchestrator | 2026-02-05 00:50:51.617489 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-05 00:50:51.617497 | orchestrator | 2026-02-05 00:50:51.617505 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-05 00:50:51.617513 | orchestrator | Thursday 05 February 2026 00:50:47 +0000 (0:00:09.633) 0:02:11.582 ***** 2026-02-05 00:50:51.617520 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:50:51.617528 | orchestrator | 2026-02-05 00:50:51.617536 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-05 00:50:51.617544 | orchestrator | Thursday 05 February 2026 00:50:47 +0000 (0:00:00.592) 0:02:12.175 ***** 2026-02-05 00:50:51.617552 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:50:51.617560 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:50:51.617568 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:50:51.617576 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-05 00:50:51.617584 | orchestrator | enable_outward_rabbitmq_True 2026-02-05 00:50:51.617591 | orchestrator | 2026-02-05 00:50:51.617600 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-05 00:50:51.617617 | orchestrator | skipping: no hosts matched 2026-02-05 00:50:51.617626 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-05 00:50:51.617634 | orchestrator | outward_rabbitmq_restart 2026-02-05 00:50:51.617642 | orchestrator | 2026-02-05 00:50:51.617650 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-05 00:50:51.617658 | orchestrator | skipping: no hosts matched 2026-02-05 00:50:51.617666 | orchestrator | 2026-02-05 00:50:51.617674 | 
orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-05 00:50:51.617682 | orchestrator | skipping: no hosts matched 2026-02-05 00:50:51.617690 | orchestrator | 2026-02-05 00:50:51.617698 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:50:51.617706 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-05 00:50:51.617720 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-05 00:50:51.617728 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:50:51.617735 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-05 00:50:51.617742 | orchestrator | 2026-02-05 00:50:51.617749 | orchestrator | 2026-02-05 00:50:51.617756 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:50:51.617763 | orchestrator | Thursday 05 February 2026 00:50:50 +0000 (0:00:02.206) 0:02:14.381 ***** 2026-02-05 00:50:51.617769 | orchestrator | =============================================================================== 2026-02-05 00:50:51.617776 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 73.16s 2026-02-05 00:50:51.617783 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.38s 2026-02-05 00:50:51.617790 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.13s 2026-02-05 00:50:51.617796 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.64s 2026-02-05 00:50:51.617803 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.17s 2026-02-05 00:50:51.617810 | orchestrator | rabbitmq : 
Copying over advanced.config --------------------------------- 2.61s 2026-02-05 00:50:51.617817 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.21s 2026-02-05 00:50:51.617824 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.01s 2026-02-05 00:50:51.617831 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.96s 2026-02-05 00:50:51.617837 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.93s 2026-02-05 00:50:51.617844 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.80s 2026-02-05 00:50:51.617851 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.79s 2026-02-05 00:50:51.617858 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.70s 2026-02-05 00:50:51.617864 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.40s 2026-02-05 00:50:51.617871 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2026-02-05 00:50:51.617878 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.11s 2026-02-05 00:50:51.617885 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.03s 2026-02-05 00:50:51.617895 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.02s 2026-02-05 00:50:51.617902 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.02s 2026-02-05 00:50:51.617909 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.85s 2026-02-05 00:50:51.617916 | orchestrator | 2026-02-05 00:50:51 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:50:51.617923 | orchestrator | 2026-02-05 
00:50:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:50:54.650780 | orchestrator | 2026-02-05 00:50:54 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:50:54.651029 | orchestrator | 2026-02-05 00:50:54 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:50:54.652850 | orchestrator | 2026-02-05 00:50:54 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:50:54.652899 | orchestrator | 2026-02-05 00:50:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:50:57.684618 | orchestrator | 2026-02-05 00:50:57 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:50:57.685642 | orchestrator | 2026-02-05 00:50:57 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:50:57.687855 | orchestrator | 2026-02-05 00:50:57 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:50:57.687902 | orchestrator | 2026-02-05 00:50:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:00.718240 | orchestrator | 2026-02-05 00:51:00 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:00.719812 | orchestrator | 2026-02-05 00:51:00 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:00.721400 | orchestrator | 2026-02-05 00:51:00 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:00.721741 | orchestrator | 2026-02-05 00:51:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:03.767578 | orchestrator | 2026-02-05 00:51:03 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:03.769019 | orchestrator | 2026-02-05 00:51:03 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:03.770904 | orchestrator | 2026-02-05 00:51:03 | INFO  | Task 
04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:03.770941 | orchestrator | 2026-02-05 00:51:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:06.816139 | orchestrator | 2026-02-05 00:51:06 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:06.817806 | orchestrator | 2026-02-05 00:51:06 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:06.819923 | orchestrator | 2026-02-05 00:51:06 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:06.820036 | orchestrator | 2026-02-05 00:51:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:09.860915 | orchestrator | 2026-02-05 00:51:09 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:09.861084 | orchestrator | 2026-02-05 00:51:09 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:09.861860 | orchestrator | 2026-02-05 00:51:09 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:09.861931 | orchestrator | 2026-02-05 00:51:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:12.893735 | orchestrator | 2026-02-05 00:51:12 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:12.894518 | orchestrator | 2026-02-05 00:51:12 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:12.895402 | orchestrator | 2026-02-05 00:51:12 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:12.895461 | orchestrator | 2026-02-05 00:51:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:15.922312 | orchestrator | 2026-02-05 00:51:15 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:15.922671 | orchestrator | 2026-02-05 00:51:15 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state 
STARTED 2026-02-05 00:51:15.923556 | orchestrator | 2026-02-05 00:51:15 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:15.923593 | orchestrator | 2026-02-05 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:18.953678 | orchestrator | 2026-02-05 00:51:18 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:18.954970 | orchestrator | 2026-02-05 00:51:18 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:18.956443 | orchestrator | 2026-02-05 00:51:18 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:18.956482 | orchestrator | 2026-02-05 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:21.995368 | orchestrator | 2026-02-05 00:51:21 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:21.996862 | orchestrator | 2026-02-05 00:51:21 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:21.998355 | orchestrator | 2026-02-05 00:51:21 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:21.998514 | orchestrator | 2026-02-05 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:25.037460 | orchestrator | 2026-02-05 00:51:25 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:25.037544 | orchestrator | 2026-02-05 00:51:25 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:25.038348 | orchestrator | 2026-02-05 00:51:25 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:25.038397 | orchestrator | 2026-02-05 00:51:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:28.065573 | orchestrator | 2026-02-05 00:51:28 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:28.065648 | orchestrator | 
2026-02-05 00:51:28 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:28.066223 | orchestrator | 2026-02-05 00:51:28 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:28.066276 | orchestrator | 2026-02-05 00:51:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:31.102675 | orchestrator | 2026-02-05 00:51:31 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:31.104732 | orchestrator | 2026-02-05 00:51:31 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:31.106702 | orchestrator | 2026-02-05 00:51:31 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:31.106754 | orchestrator | 2026-02-05 00:51:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:34.150819 | orchestrator | 2026-02-05 00:51:34 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:34.153721 | orchestrator | 2026-02-05 00:51:34 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:34.156592 | orchestrator | 2026-02-05 00:51:34 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:34.156719 | orchestrator | 2026-02-05 00:51:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:37.200340 | orchestrator | 2026-02-05 00:51:37 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:37.201764 | orchestrator | 2026-02-05 00:51:37 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:37.203511 | orchestrator | 2026-02-05 00:51:37 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:37.203556 | orchestrator | 2026-02-05 00:51:37 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:40.239050 | orchestrator | 2026-02-05 00:51:40 | INFO  | Task 
daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:40.240423 | orchestrator | 2026-02-05 00:51:40 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:40.242130 | orchestrator | 2026-02-05 00:51:40 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:40.242318 | orchestrator | 2026-02-05 00:51:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:43.270929 | orchestrator | 2026-02-05 00:51:43 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:43.271090 | orchestrator | 2026-02-05 00:51:43 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state STARTED 2026-02-05 00:51:43.271796 | orchestrator | 2026-02-05 00:51:43 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:51:43.271847 | orchestrator | 2026-02-05 00:51:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:51:46.300870 | orchestrator | 2026-02-05 00:51:46 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:51:46.304257 | orchestrator | 2026-02-05 00:51:46.304343 | orchestrator | 2026-02-05 00:51:46.304352 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:51:46.304357 | orchestrator | 2026-02-05 00:51:46.304362 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:51:46.304368 | orchestrator | Thursday 05 February 2026 00:49:26 +0000 (0:00:00.185) 0:00:00.185 ***** 2026-02-05 00:51:46.304374 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:51:46.304385 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:51:46.304392 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:51:46.304398 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.304405 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.304411 | orchestrator | ok: [testbed-node-2] 2026-02-05 
00:51:46.304417 | orchestrator | 2026-02-05 00:51:46.304423 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:51:46.304430 | orchestrator | Thursday 05 February 2026 00:49:27 +0000 (0:00:00.541) 0:00:00.726 ***** 2026-02-05 00:51:46.304514 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-05 00:51:46.304525 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-05 00:51:46.304532 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-05 00:51:46.304538 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-05 00:51:46.304544 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-05 00:51:46.304550 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-05 00:51:46.304557 | orchestrator | 2026-02-05 00:51:46.304564 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-05 00:51:46.304570 | orchestrator | 2026-02-05 00:51:46.304577 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-05 00:51:46.304583 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:00.780) 0:00:01.507 ***** 2026-02-05 00:51:46.304605 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:51:46.304614 | orchestrator | 2026-02-05 00:51:46.304620 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-05 00:51:46.304627 | orchestrator | Thursday 05 February 2026 00:49:29 +0000 (0:00:01.110) 0:00:02.617 ***** 2026-02-05 00:51:46.304636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304645 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304858 | orchestrator | 2026-02-05 00:51:46.304872 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-05 00:51:46.304876 | orchestrator | Thursday 05 February 2026 00:49:30 +0000 (0:00:01.285) 0:00:03.903 ***** 2026-02-05 00:51:46.304880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304916 | orchestrator | 2026-02-05 00:51:46.304919 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-05 00:51:46.304923 | orchestrator | Thursday 05 February 2026 00:49:32 +0000 (0:00:01.627) 0:00:05.530 ***** 2026-02-05 00:51:46.304927 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304964 | orchestrator | 2026-02-05 00:51:46.304968 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-05 00:51:46.304976 | orchestrator | Thursday 05 February 2026 00:49:33 +0000 (0:00:01.071) 0:00:06.602 ***** 2026-02-05 00:51:46.304980 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.304992 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305038 | orchestrator | 2026-02-05 00:51:46.305047 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-05 00:51:46.305053 | orchestrator | Thursday 05 
February 2026 00:49:34 +0000 (0:00:01.392) 0:00:07.994 ***** 2026-02-05 00:51:46.305059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305089 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.305170 | orchestrator | 2026-02-05 00:51:46.305174 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-05 00:51:46.305178 | orchestrator | Thursday 05 February 2026 00:49:36 +0000 (0:00:01.472) 0:00:09.467 ***** 2026-02-05 00:51:46.305182 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:51:46.305187 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:51:46.305190 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:51:46.305194 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:51:46.305198 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:51:46.305217 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:51:46.305221 | orchestrator | 2026-02-05 00:51:46.305225 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-05 00:51:46.305229 | orchestrator | Thursday 05 February 2026 00:49:38 +0000 (0:00:02.537) 0:00:12.005 ***** 2026-02-05 00:51:46.305233 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-05 00:51:46.305238 | orchestrator | 
changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-05 00:51:46.305242 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-05 00:51:46.305246 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-05 00:51:46.305250 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-05 00:51:46.305254 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-05 00:51:46.305257 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 00:51:46.305261 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 00:51:46.305269 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 00:51:46.305273 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 00:51:46.305277 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 00:51:46.305281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-05 00:51:46.305289 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-05 00:51:46.305295 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-05 00:51:46.305299 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-05 00:51:46.305303 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-05 00:51:46.305307 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-05 00:51:46.305311 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-05 00:51:46.305317 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 00:51:46.305322 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 00:51:46.305326 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 00:51:46.305330 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 00:51:46.305334 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 00:51:46.305337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-05 00:51:46.305341 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 00:51:46.305345 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 00:51:46.305349 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 00:51:46.305353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 00:51:46.305356 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 00:51:46.305360 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-05 00:51:46.305364 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:51:46.305368 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:51:46.305372 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:51:46.305443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:51:46.305451 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:51:46.305457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-05 00:51:46.305463 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 00:51:46.305470 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 00:51:46.305476 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-05 00:51:46.305482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 00:51:46.305489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 00:51:46.305499 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-05 00:51:46.305503 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-05 00:51:46.305509 | 
orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-05 00:51:46.305516 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-05 00:51:46.305520 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-05 00:51:46.305524 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-05 00:51:46.305528 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-05 00:51:46.305532 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 00:51:46.305536 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 00:51:46.305539 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-05 00:51:46.305543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 00:51:46.305547 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 00:51:46.305551 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-05 00:51:46.305555 | orchestrator | 2026-02-05 00:51:46.305562 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:51:46.305566 | 
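The loop above writes OVN settings into the Open vSwitch `external_ids` table on every node. A minimal sketch of how the `ovn-remote` value seen in the log is assembled from the three controller IPs (IPs taken from the log; the final `ovs-vsctl` call is shown as a comment because it needs a running Open vSwitch, and the exact module the role uses is not visible in this log):

```shell
# Join the three OVN Southbound DB endpoints into the ovn-remote string.
nodes="192.168.16.10 192.168.16.11 192.168.16.12"
remote=""
for ip in $nodes; do
  remote="${remote:+$remote,}tcp:${ip}:6642"
done
echo "$remote"   # tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
# ovs-vsctl set open_vswitch . external_ids:ovn-remote="$remote"
```

Port 6642 is the OVN Southbound DB port; every node's ovn-controller connects there, which is why the same `ovn-remote` value is applied on all six nodes.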
orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:20.664) 0:00:32.670 ***** 2026-02-05 00:51:46.305570 | orchestrator | 2026-02-05 00:51:46.305574 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:51:46.305577 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.192) 0:00:32.862 ***** 2026-02-05 00:51:46.305581 | orchestrator | 2026-02-05 00:51:46.305585 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:51:46.305590 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.066) 0:00:32.928 ***** 2026-02-05 00:51:46.305596 | orchestrator | 2026-02-05 00:51:46.305602 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:51:46.305612 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.075) 0:00:33.004 ***** 2026-02-05 00:51:46.305620 | orchestrator | 2026-02-05 00:51:46.305625 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:51:46.305631 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.061) 0:00:33.065 ***** 2026-02-05 00:51:46.305637 | orchestrator | 2026-02-05 00:51:46.305642 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-05 00:51:46.305648 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.059) 0:00:33.125 ***** 2026-02-05 00:51:46.305654 | orchestrator | 2026-02-05 00:51:46.305659 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-05 00:51:46.305665 | orchestrator | Thursday 05 February 2026 00:49:59 +0000 (0:00:00.062) 0:00:33.187 ***** 2026-02-05 00:51:46.305671 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.305677 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:51:46.305683 | orchestrator | ok: [testbed-node-5] 
2026-02-05 00:51:46.305695 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.305702 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:51:46.305708 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.305714 | orchestrator | 2026-02-05 00:51:46.305721 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-05 00:51:46.305725 | orchestrator | Thursday 05 February 2026 00:50:01 +0000 (0:00:01.557) 0:00:34.745 ***** 2026-02-05 00:51:46.305728 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:51:46.305733 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:51:46.305739 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:51:46.305746 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:51:46.305756 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:51:46.305762 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:51:46.305767 | orchestrator | 2026-02-05 00:51:46.305773 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-05 00:51:46.305778 | orchestrator | 2026-02-05 00:51:46.305784 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 00:51:46.305790 | orchestrator | Thursday 05 February 2026 00:50:25 +0000 (0:00:23.676) 0:00:58.422 ***** 2026-02-05 00:51:46.305796 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:51:46.305802 | orchestrator | 2026-02-05 00:51:46.305809 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 00:51:46.305814 | orchestrator | Thursday 05 February 2026 00:50:25 +0000 (0:00:00.623) 0:00:59.045 ***** 2026-02-05 00:51:46.305819 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:51:46.305822 | orchestrator | 2026-02-05 
00:51:46.305826 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-05 00:51:46.305830 | orchestrator | Thursday 05 February 2026 00:50:26 +0000 (0:00:00.478) 0:00:59.523 ***** 2026-02-05 00:51:46.305834 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.305838 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.305842 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.305848 | orchestrator | 2026-02-05 00:51:46.305854 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-05 00:51:46.305859 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:01.015) 0:01:00.539 ***** 2026-02-05 00:51:46.305865 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.305871 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.305878 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.305890 | orchestrator | 2026-02-05 00:51:46.305897 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-05 00:51:46.305903 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:00.291) 0:01:00.831 ***** 2026-02-05 00:51:46.305911 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.305915 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.305919 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.305922 | orchestrator | 2026-02-05 00:51:46.305928 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-05 00:51:46.305934 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:00.326) 0:01:01.157 ***** 2026-02-05 00:51:46.305939 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.305946 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.305953 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.305960 | orchestrator | 2026-02-05 00:51:46.305968 | orchestrator | TASK [ovn-db : Establish whether the OVN 
SB cluster has already existed] ******* 2026-02-05 00:51:46.305972 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:00.252) 0:01:01.410 ***** 2026-02-05 00:51:46.305976 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.305980 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.305983 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.305987 | orchestrator | 2026-02-05 00:51:46.305991 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-05 00:51:46.305994 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:00.363) 0:01:01.773 ***** 2026-02-05 00:51:46.306085 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306095 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306102 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306108 | orchestrator | 2026-02-05 00:51:46.306115 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-05 00:51:46.306121 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:00.233) 0:01:02.006 ***** 2026-02-05 00:51:46.306127 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306138 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306146 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306152 | orchestrator | 2026-02-05 00:51:46.306159 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-05 00:51:46.306166 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:00.236) 0:01:02.243 ***** 2026-02-05 00:51:46.306172 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306178 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306183 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306188 | orchestrator | 2026-02-05 00:51:46.306193 | orchestrator | TASK [ovn-db : Get OVN NB database information] 
******************************** 2026-02-05 00:51:46.306197 | orchestrator | Thursday 05 February 2026 00:50:29 +0000 (0:00:00.266) 0:01:02.510 ***** 2026-02-05 00:51:46.306201 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306206 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306210 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306215 | orchestrator | 2026-02-05 00:51:46.306219 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-05 00:51:46.306224 | orchestrator | Thursday 05 February 2026 00:50:29 +0000 (0:00:00.273) 0:01:02.784 ***** 2026-02-05 00:51:46.306228 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306232 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306237 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306241 | orchestrator | 2026-02-05 00:51:46.306246 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-05 00:51:46.306250 | orchestrator | Thursday 05 February 2026 00:50:29 +0000 (0:00:00.416) 0:01:03.200 ***** 2026-02-05 00:51:46.306255 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306259 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306263 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306268 | orchestrator | 2026-02-05 00:51:46.306272 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-05 00:51:46.306277 | orchestrator | Thursday 05 February 2026 00:50:30 +0000 (0:00:00.242) 0:01:03.443 ***** 2026-02-05 00:51:46.306281 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306285 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306289 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306294 | orchestrator | 2026-02-05 00:51:46.306298 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] 
***************************** 2026-02-05 00:51:46.306302 | orchestrator | Thursday 05 February 2026 00:50:30 +0000 (0:00:00.318) 0:01:03.761 ***** 2026-02-05 00:51:46.306307 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306312 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306316 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306320 | orchestrator | 2026-02-05 00:51:46.306325 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-05 00:51:46.306329 | orchestrator | Thursday 05 February 2026 00:50:30 +0000 (0:00:00.288) 0:01:04.049 ***** 2026-02-05 00:51:46.306334 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306338 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306342 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306347 | orchestrator | 2026-02-05 00:51:46.306351 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-05 00:51:46.306355 | orchestrator | Thursday 05 February 2026 00:50:31 +0000 (0:00:00.537) 0:01:04.587 ***** 2026-02-05 00:51:46.306364 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306368 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306372 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306377 | orchestrator | 2026-02-05 00:51:46.306382 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-05 00:51:46.306389 | orchestrator | Thursday 05 February 2026 00:50:31 +0000 (0:00:00.415) 0:01:05.002 ***** 2026-02-05 00:51:46.306395 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306402 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306407 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306414 | orchestrator | 2026-02-05 00:51:46.306420 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no 
leader] ***************** 2026-02-05 00:51:46.306425 | orchestrator | Thursday 05 February 2026 00:50:32 +0000 (0:00:00.336) 0:01:05.339 ***** 2026-02-05 00:51:46.306431 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306437 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306450 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306456 | orchestrator | 2026-02-05 00:51:46.306463 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-05 00:51:46.306469 | orchestrator | Thursday 05 February 2026 00:50:32 +0000 (0:00:00.304) 0:01:05.643 ***** 2026-02-05 00:51:46.306475 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:51:46.306482 | orchestrator | 2026-02-05 00:51:46.306486 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-05 00:51:46.306490 | orchestrator | Thursday 05 February 2026 00:50:33 +0000 (0:00:00.735) 0:01:06.379 ***** 2026-02-05 00:51:46.306494 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.306498 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.306502 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.306505 | orchestrator | 2026-02-05 00:51:46.306509 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-05 00:51:46.306513 | orchestrator | Thursday 05 February 2026 00:50:33 +0000 (0:00:00.428) 0:01:06.807 ***** 2026-02-05 00:51:46.306517 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:51:46.306520 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:51:46.306524 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:51:46.306528 | orchestrator | 2026-02-05 00:51:46.306532 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-02-05 00:51:46.306535 | orchestrator | Thursday 05 February 2026 
00:50:34 +0000 (0:00:00.456) 0:01:07.263 ***** 2026-02-05 00:51:46.306539 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306543 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306547 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306551 | orchestrator | 2026-02-05 00:51:46.306554 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-05 00:51:46.306558 | orchestrator | Thursday 05 February 2026 00:50:34 +0000 (0:00:00.513) 0:01:07.777 ***** 2026-02-05 00:51:46.306565 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306569 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306573 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306576 | orchestrator | 2026-02-05 00:51:46.306580 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-05 00:51:46.306584 | orchestrator | Thursday 05 February 2026 00:50:34 +0000 (0:00:00.291) 0:01:08.068 ***** 2026-02-05 00:51:46.306588 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306592 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306596 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306599 | orchestrator | 2026-02-05 00:51:46.306603 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-05 00:51:46.306607 | orchestrator | Thursday 05 February 2026 00:50:35 +0000 (0:00:00.291) 0:01:08.360 ***** 2026-02-05 00:51:46.306611 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306619 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306623 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306627 | orchestrator | 2026-02-05 00:51:46.306630 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-02-05 00:51:46.306634 | orchestrator | Thursday 05 
February 2026 00:50:35 +0000 (0:00:00.301) 0:01:08.662 ***** 2026-02-05 00:51:46.306638 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306643 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306650 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306659 | orchestrator | 2026-02-05 00:51:46.306667 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-05 00:51:46.306673 | orchestrator | Thursday 05 February 2026 00:50:35 +0000 (0:00:00.275) 0:01:08.937 ***** 2026-02-05 00:51:46.306680 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:51:46.306686 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:51:46.306691 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:51:46.306696 | orchestrator | 2026-02-05 00:51:46.306702 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-05 00:51:46.306707 | orchestrator | Thursday 05 February 2026 00:50:36 +0000 (0:00:00.413) 0:01:09.351 ***** 2026-02-05 00:51:46.306715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46 | INFO  | Task a73c45d6-3f98-402d-a10b-9e6f436835cd is in state SUCCESS 2026-02-05 00:51:46.306760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306797 | orchestrator | 2026-02-05 00:51:46.306801 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-05 00:51:46.306805 | orchestrator | Thursday 05 February 2026 00:50:37 +0000 (0:00:01.586) 0:01:10.938 ***** 2026-02-05 00:51:46.306809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306874 | orchestrator | 2026-02-05 00:51:46.306880 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-05 00:51:46.306886 | orchestrator | Thursday 05 February 2026 00:50:41 +0000 (0:00:03.707) 0:01:14.646 ***** 2026-02-05 00:51:46.306891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 00:51:46.306926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2026-02-05 00:51:46.306933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.306945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.306955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.306961 | orchestrator |
2026-02-05 00:51:46.306967 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:51:46.306972 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:02.078) 0:01:16.724 *****
2026-02-05 00:51:46.306976 | orchestrator |
2026-02-05 00:51:46.306980 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:51:46.306984 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:00.298) 0:01:17.023 *****
2026-02-05 00:51:46.306988 | orchestrator |
2026-02-05 00:51:46.306992 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:51:46.306995 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:00.073) 0:01:17.097 *****
2026-02-05 00:51:46.307038 | orchestrator |
2026-02-05 00:51:46.307044 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-05 00:51:46.307048 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:00.081) 0:01:17.179 *****
2026-02-05 00:51:46.307052 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:51:46.307055 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:51:46.307059 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:51:46.307063 | orchestrator |
2026-02-05 00:51:46.307067 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-05 00:51:46.307071 | orchestrator | Thursday 05 February 2026 00:50:51 +0000 (0:00:07.648) 0:01:24.827 *****
2026-02-05 00:51:46.307074 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:51:46.307078 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:51:46.307082 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:51:46.307085 | orchestrator |
2026-02-05 00:51:46.307089 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-05 00:51:46.307093 | orchestrator | Thursday 05 February 2026 00:50:59 +0000 (0:00:07.764) 0:01:32.592 *****
2026-02-05 00:51:46.307097 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:51:46.307100 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:51:46.307104 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:51:46.307108 | orchestrator |
2026-02-05 00:51:46.307111 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-05 00:51:46.307115 | orchestrator | Thursday 05 February 2026 00:51:06 +0000 (0:00:07.210) 0:01:39.802 *****
2026-02-05 00:51:46.307119 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:51:46.307123 | orchestrator |
2026-02-05 00:51:46.307126 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-05 00:51:46.307130 | orchestrator | Thursday 05 February 2026 00:51:06 +0000 (0:00:00.117) 0:01:39.920 *****
2026-02-05 00:51:46.307142 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307146 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307149 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307153 | orchestrator |
2026-02-05 00:51:46.307157 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-05 00:51:46.307167 | orchestrator | Thursday 05 February 2026 00:51:07 +0000 (0:00:00.884) 0:01:40.805 *****
2026-02-05 00:51:46.307175 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:51:46.307179 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:51:46.307182 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:51:46.307186 | orchestrator |
2026-02-05 00:51:46.307190 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-05 00:51:46.307194 | orchestrator | Thursday 05 February 2026 00:51:08 +0000 (0:00:00.669) 0:01:41.475 *****
2026-02-05 00:51:46.307197 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307201 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307205 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307210 | orchestrator |
2026-02-05 00:51:46.307217 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-05 00:51:46.307223 | orchestrator | Thursday 05 February 2026 00:51:08 +0000 (0:00:00.753) 0:01:42.228 *****
2026-02-05 00:51:46.307229 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:51:46.307235 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:51:46.307241 | orchestrator | changed: [testbed-node-0]
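The "Get OVN_Northbound cluster leader" / "Get OVN_Southbound cluster leader" steps above determine which of the three database replicas currently holds the Raft leadership, so that the follow-up "Configure OVN ... connection settings" tasks run only on the leader (here testbed-node-0). As an illustrative sketch only (not the playbook's actual code): leader detection on a node typically comes down to reading the `Role:` field from `ovs-appctl cluster/status`-style output. The sample text and helper names below are invented for the example.

```python
# Hypothetical parser for ovsdb-server Raft cluster/status output.
# SAMPLE_STATUS is made-up sample text, not output captured from this job.
SAMPLE_STATUS = """\
Name: OVN_Northbound
Cluster ID: f7a2 (f7a2c3d4)
Server ID: 9f87 (9f87aa01)
Address: tcp:192.0.2.10:6643
Status: cluster member
Role: leader
Term: 4
Leader: self
"""

def parse_role(status_text: str) -> str:
    """Return the Raft role ('leader', 'follower', ...) from cluster/status output."""
    for line in status_text.splitlines():
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no 'Role:' line found in cluster/status output")

def is_leader(status_text: str) -> bool:
    return parse_role(status_text) == "leader"

print(parse_role(SAMPLE_STATUS))
```

A play like the one above would run such a check on every node and then restrict the connection-settings task to the one node where the role is `leader`.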
2026-02-05 00:51:46.307248 | orchestrator |
2026-02-05 00:51:46.307254 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-05 00:51:46.307265 | orchestrator | Thursday 05 February 2026 00:51:09 +0000 (0:00:00.631) 0:01:42.860 *****
2026-02-05 00:51:46.307272 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307278 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307284 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307290 | orchestrator |
2026-02-05 00:51:46.307294 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-05 00:51:46.307298 | orchestrator | Thursday 05 February 2026 00:51:10 +0000 (0:00:01.200) 0:01:44.060 *****
2026-02-05 00:51:46.307301 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307305 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307309 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307313 | orchestrator |
2026-02-05 00:51:46.307317 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-05 00:51:46.307320 | orchestrator | Thursday 05 February 2026 00:51:11 +0000 (0:00:00.776) 0:01:44.837 *****
2026-02-05 00:51:46.307324 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307328 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307332 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307336 | orchestrator |
2026-02-05 00:51:46.307340 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-05 00:51:46.307343 | orchestrator | Thursday 05 February 2026 00:51:11 +0000 (0:00:00.315) 0:01:45.152 *****
2026-02-05 00:51:46.307347 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307355 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307359 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307363 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307373 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307381 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307385 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307393 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307397 | orchestrator |
2026-02-05 00:51:46.307401 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-05 00:51:46.307406 | orchestrator | Thursday 05 February 2026 00:51:13 +0000 (0:00:01.679) 0:01:46.831 *****
2026-02-05 00:51:46.307412 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307418 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307427 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307433 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307458 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307472 | orchestrator |
2026-02-05 00:51:46.307479 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-05 00:51:46.307485 | orchestrator | Thursday 05 February 2026 00:51:17 +0000 (0:00:04.117) 0:01:50.949 *****
2026-02-05 00:51:46.307496 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307502 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307510 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307521 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 00:51:46.307547 | orchestrator |
2026-02-05 00:51:46.307553 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:51:46.307559 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:02.808) 0:01:53.758 *****
2026-02-05 00:51:46.307565 | orchestrator |
2026-02-05 00:51:46.307571 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:51:46.307583 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:00.061) 0:01:53.820 *****
2026-02-05 00:51:46.307590 | orchestrator |
2026-02-05 00:51:46.307596 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-05 00:51:46.307602 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:00.061) 0:01:53.881 *****
2026-02-05 00:51:46.307608 | orchestrator |
2026-02-05 00:51:46.307616 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-05 00:51:46.307623 | orchestrator | Thursday 05 February 2026 00:51:20 +0000 (0:00:00.059) 0:01:53.941 *****
2026-02-05 00:51:46.307627 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:51:46.307630 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:51:46.307634 | orchestrator |
2026-02-05 00:51:46.307638 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-05 00:51:46.307642 | orchestrator | Thursday 05 February 2026 00:51:26 +0000 (0:00:06.129) 0:02:00.071 *****
2026-02-05 00:51:46.307646 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:51:46.307649 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:51:46.307653 | orchestrator |
2026-02-05 00:51:46.307657 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-05 00:51:46.307661 | orchestrator | Thursday 05 February 2026 00:51:33 +0000 (0:00:06.329) 0:02:06.401 *****
2026-02-05 00:51:46.307664 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:51:46.307669 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:51:46.307672 | orchestrator |
2026-02-05 00:51:46.307676 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-05 00:51:46.307684 | orchestrator | Thursday 05 February 2026 00:51:39 +0000 (0:00:06.432) 0:02:12.833 *****
2026-02-05 00:51:46.307688 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:51:46.307692 | orchestrator |
2026-02-05 00:51:46.307696 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-05 00:51:46.307699 | orchestrator | Thursday 05 February 2026 00:51:39 +0000 (0:00:00.242) 0:02:13.075 *****
2026-02-05 00:51:46.307703 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307707 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307711 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307715 | orchestrator |
2026-02-05 00:51:46.307718 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-05 00:51:46.307722 | orchestrator | Thursday 05 February 2026 00:51:40 +0000 (0:00:00.769) 0:02:13.845 *****
2026-02-05 00:51:46.307726 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:51:46.307730 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:51:46.307743 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:51:46.307748 | orchestrator |
2026-02-05 00:51:46.307751 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-05 00:51:46.307755 | orchestrator | Thursday 05 February 2026 00:51:41 +0000 (0:00:00.650) 0:02:14.495 *****
2026-02-05 00:51:46.307759 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307763 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307767 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307776 | orchestrator |
2026-02-05 00:51:46.307780 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-05 00:51:46.307783 | orchestrator | Thursday 05 February 2026 00:51:41 +0000 (0:00:00.748) 0:02:15.243 *****
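The "Wait for ovn-nb-db" / "Wait for ovn-sb-db" tasks above block until the freshly restarted database containers accept connections again. A rough stand-alone equivalent of that kind of check (the function name is mine; 6641/6642 are the conventional OVN NB/SB ports, which may not be what this playbook actually probes):

```python
# Sketch of a "wait until the TCP endpoint is reachable" check, similar in
# spirit to Ansible's wait_for module. Not the playbook's actual code.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Retry a TCP connect to host:port until it succeeds or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # something is listening and accepted the connection
        except OSError:
            time.sleep(interval)  # refused or unreachable; retry after a pause
    return False

# Demo probe against an illustrative NB endpoint; result depends on the host.
print(wait_for_port("127.0.0.1", 6641, timeout=0.5, interval=0.1))
```

In the play this runs on all three nodes, which is why each wait task reports `ok` for testbed-node-0/1/2 once the cluster is back up.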
2026-02-05 00:51:46.307787 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:51:46.307791 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:51:46.307795 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:51:46.307798 | orchestrator |
2026-02-05 00:51:46.307802 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-05 00:51:46.307806 | orchestrator | Thursday 05 February 2026 00:51:42 +0000 (0:00:00.802) 0:02:16.046 *****
2026-02-05 00:51:46.307810 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307814 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307818 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307821 | orchestrator |
2026-02-05 00:51:46.307825 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-05 00:51:46.307829 | orchestrator | Thursday 05 February 2026 00:51:43 +0000 (0:00:00.719) 0:02:16.765 *****
2026-02-05 00:51:46.307833 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:51:46.307837 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:51:46.307841 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:51:46.307845 | orchestrator |
2026-02-05 00:51:46.307849 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:51:46.307853 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-05 00:51:46.307857 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-05 00:51:46.307861 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-05 00:51:46.307865 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:51:46.307869 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:51:46.307874 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 00:51:46.307885 | orchestrator |
2026-02-05 00:51:46.307892 | orchestrator |
2026-02-05 00:51:46.307898 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:51:46.307904 | orchestrator | Thursday 05 February 2026 00:51:44 +0000 (0:00:00.900) 0:02:17.666 *****
2026-02-05 00:51:46.307911 | orchestrator | ===============================================================================
2026-02-05 00:51:46.307917 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 23.68s
2026-02-05 00:51:46.307924 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.66s
2026-02-05 00:51:46.307931 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.09s
2026-02-05 00:51:46.307937 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.78s
2026-02-05 00:51:46.307943 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.64s
2026-02-05 00:51:46.307955 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.12s
2026-02-05 00:51:46.307959 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.71s
2026-02-05 00:51:46.307963 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.81s
2026-02-05 00:51:46.307967 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.54s
2026-02-05 00:51:46.307970 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.08s
2026-02-05 00:51:46.307974 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.68s
2026-02-05 00:51:46.307978 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.63s
2026-02-05 00:51:46.307982 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s
2026-02-05 00:51:46.307986 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.56s
2026-02-05 00:51:46.307990 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.47s
2026-02-05 00:51:46.307993 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.39s
2026-02-05 00:51:46.307997 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.29s
2026-02-05 00:51:46.308015 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.20s
2026-02-05 00:51:46.308022 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.11s
2026-02-05 00:51:46.308026 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.07s
2026-02-05 00:51:46.308030 | orchestrator | 2026-02-05 00:51:46 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:51:46.308038 | orchestrator | 2026-02-05 00:51:46 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:51:49.340583 | orchestrator | 2026-02-05 00:51:49 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:51:49.342849 | orchestrator | 2026-02-05 00:51:49 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:51:49.343080 | orchestrator | 2026-02-05 00:51:49 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:51:52.383681 | orchestrator | 2026-02-05 00:51:52 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:51:52.383784 | orchestrator | 2026-02-05 00:51:52 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
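The TASKS RECAP above lists per-task durations in a fixed "task name ---- N.NNs" format. If you want to post-process such recaps (e.g. to find the slowest steps across runs), a small parser is enough; the helper name and regex below are my own, not part of the job tooling:

```python
# Parse Ansible profile_tasks-style recap lines like
#   "ovn-db : Restart ovn-sb-db container ------------- 14.09s"
# into (task name, seconds) pairs.
import re

RECAP_LINE = re.compile(r"^(?P<task>.+?) -{2,} (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Yield (task, seconds) for every line matching the recap format."""
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            yield m.group("task").strip(), float(m.group("secs"))

sample = [
    "ovn-controller : Restart ovn-controller container ---------------------- 23.68s",
    "ovn-db : Restart ovn-sb-db container ----------------------------------- 14.09s",
    "not a recap line",
]
for task, secs in sorted(parse_recap(sample), key=lambda t: -t[1]):
    print(f"{secs:6.2f}s  {task}")
```

Note that the recap can list the same task name twice (as "Check ovn containers" does above) when a role runs more than once, so aggregating by name should sum rather than overwrite.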
INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:52:56.305345 | orchestrator | 2026-02-05 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:52:59.350190 | orchestrator | 2026-02-05 00:52:59 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:52:59.352189 | orchestrator | 2026-02-05 00:52:59 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:52:59.352351 | orchestrator | 2026-02-05 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:02.388171 | orchestrator | 2026-02-05 00:53:02 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:02.390068 | orchestrator | 2026-02-05 00:53:02 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:02.390129 | orchestrator | 2026-02-05 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:05.434503 | orchestrator | 2026-02-05 00:53:05 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:05.435830 | orchestrator | 2026-02-05 00:53:05 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:05.435896 | orchestrator | 2026-02-05 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:08.475739 | orchestrator | 2026-02-05 00:53:08 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:08.475948 | orchestrator | 2026-02-05 00:53:08 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:08.475970 | orchestrator | 2026-02-05 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:11.511214 | orchestrator | 2026-02-05 00:53:11 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:11.511353 | orchestrator | 2026-02-05 00:53:11 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 
2026-02-05 00:53:11.511984 | orchestrator | 2026-02-05 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:14.547415 | orchestrator | 2026-02-05 00:53:14 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:14.550611 | orchestrator | 2026-02-05 00:53:14 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:14.552955 | orchestrator | 2026-02-05 00:53:14 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:17.592663 | orchestrator | 2026-02-05 00:53:17 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:17.594192 | orchestrator | 2026-02-05 00:53:17 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:17.594233 | orchestrator | 2026-02-05 00:53:17 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:20.642189 | orchestrator | 2026-02-05 00:53:20 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:20.643257 | orchestrator | 2026-02-05 00:53:20 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:20.643404 | orchestrator | 2026-02-05 00:53:20 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:23.677862 | orchestrator | 2026-02-05 00:53:23 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:23.677942 | orchestrator | 2026-02-05 00:53:23 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:23.677949 | orchestrator | 2026-02-05 00:53:23 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:26.716143 | orchestrator | 2026-02-05 00:53:26 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:26.717968 | orchestrator | 2026-02-05 00:53:26 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:26.718078 | orchestrator | 2026-02-05 00:53:26 | INFO  | Wait 
1 second(s) until the next check 2026-02-05 00:53:29.748329 | orchestrator | 2026-02-05 00:53:29 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:29.750697 | orchestrator | 2026-02-05 00:53:29 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:29.750771 | orchestrator | 2026-02-05 00:53:29 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:32.792398 | orchestrator | 2026-02-05 00:53:32 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:32.793998 | orchestrator | 2026-02-05 00:53:32 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:32.794110 | orchestrator | 2026-02-05 00:53:32 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:35.847292 | orchestrator | 2026-02-05 00:53:35 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:35.848882 | orchestrator | 2026-02-05 00:53:35 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:35.848939 | orchestrator | 2026-02-05 00:53:35 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:38.897406 | orchestrator | 2026-02-05 00:53:38 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:38.898999 | orchestrator | 2026-02-05 00:53:38 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:38.899057 | orchestrator | 2026-02-05 00:53:38 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:41.934878 | orchestrator | 2026-02-05 00:53:41 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:41.938205 | orchestrator | 2026-02-05 00:53:41 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:41.938260 | orchestrator | 2026-02-05 00:53:41 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:44.976687 | orchestrator | 
2026-02-05 00:53:44 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:44.978966 | orchestrator | 2026-02-05 00:53:44 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:44.979433 | orchestrator | 2026-02-05 00:53:44 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:48.025139 | orchestrator | 2026-02-05 00:53:48 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:48.027246 | orchestrator | 2026-02-05 00:53:48 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:48.027300 | orchestrator | 2026-02-05 00:53:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:51.076094 | orchestrator | 2026-02-05 00:53:51 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:51.077784 | orchestrator | 2026-02-05 00:53:51 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:51.077858 | orchestrator | 2026-02-05 00:53:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:54.120129 | orchestrator | 2026-02-05 00:53:54 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:54.121377 | orchestrator | 2026-02-05 00:53:54 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:54.121619 | orchestrator | 2026-02-05 00:53:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:53:57.170679 | orchestrator | 2026-02-05 00:53:57 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:53:57.170965 | orchestrator | 2026-02-05 00:53:57 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:53:57.170987 | orchestrator | 2026-02-05 00:53:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:00.217626 | orchestrator | 2026-02-05 00:54:00 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in 
state STARTED 2026-02-05 00:54:00.219332 | orchestrator | 2026-02-05 00:54:00 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:54:00.219401 | orchestrator | 2026-02-05 00:54:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:03.260179 | orchestrator | 2026-02-05 00:54:03 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:54:03.261420 | orchestrator | 2026-02-05 00:54:03 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:54:03.261474 | orchestrator | 2026-02-05 00:54:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:06.301829 | orchestrator | 2026-02-05 00:54:06 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:54:06.302367 | orchestrator | 2026-02-05 00:54:06 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:54:06.302559 | orchestrator | 2026-02-05 00:54:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:09.334685 | orchestrator | 2026-02-05 00:54:09 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:54:09.335482 | orchestrator | 2026-02-05 00:54:09 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:54:09.336115 | orchestrator | 2026-02-05 00:54:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:12.367278 | orchestrator | 2026-02-05 00:54:12 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:54:12.367674 | orchestrator | 2026-02-05 00:54:12 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED 2026-02-05 00:54:12.367705 | orchestrator | 2026-02-05 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:15.403202 | orchestrator | 2026-02-05 00:54:15 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED 2026-02-05 00:54:15.403293 | orchestrator | 2026-02-05 00:54:15 | 
INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state STARTED
2026-02-05 00:54:15.403658 | orchestrator | 2026-02-05 00:54:15 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:54:18.440949 | orchestrator | 2026-02-05 00:54:18 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:54:18.447028 | orchestrator | 2026-02-05 00:54:18 | INFO  | Task 04958015-ddac-444d-a56d-1ff697267161 is in state SUCCESS
2026-02-05 00:54:18.447100 | orchestrator | 
2026-02-05 00:54:18.448627 | orchestrator | 
2026-02-05 00:54:18.448674 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 00:54:18.448683 | orchestrator | 
2026-02-05 00:54:18.448690 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 00:54:18.448696 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.413) 0:00:00.413 *****
2026-02-05 00:54:18.448702 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.448710 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.448716 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.448828 | orchestrator | 
2026-02-05 00:54:18.448835 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 00:54:18.448864 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.424) 0:00:00.837 *****
2026-02-05 00:54:18.448873 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-05 00:54:18.448879 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-05 00:54:18.448885 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-05 00:54:18.448891 | orchestrator | 
2026-02-05 00:54:18.448897 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-05 00:54:18.448903 | orchestrator | 
2026-02-05 00:54:18.448909 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-05 00:54:18.448914 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.571) 0:00:01.409 *****
2026-02-05 00:54:18.448921 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:54:18.448927 | orchestrator | 
2026-02-05 00:54:18.448933 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-05 00:54:18.448939 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.783) 0:00:02.192 *****
2026-02-05 00:54:18.448944 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.448950 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.448956 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.448962 | orchestrator | 
2026-02-05 00:54:18.448967 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-05 00:54:18.448973 | orchestrator | Thursday 05 February 2026 00:48:19 +0000 (0:00:00.849) 0:00:03.042 *****
2026-02-05 00:54:18.448979 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:54:18.448985 | orchestrator | 
2026-02-05 00:54:18.448991 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-05 00:54:18.448996 | orchestrator | Thursday 05 February 2026 00:48:19 +0000 (0:00:00.837) 0:00:03.880 *****
2026-02-05 00:54:18.449002 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.449008 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.449013 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.449019 | orchestrator | 
2026-02-05 00:54:18.449025 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-05 00:54:18.449031 | orchestrator | Thursday 05 February 2026 00:48:20 +0000 (0:00:00.602) 0:00:04.482 *****
2026-02-05 00:54:18.449037 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 00:54:18.449042 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 00:54:18.449049 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 00:54:18.449060 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 00:54:18.449639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-05 00:54:18.449647 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 00:54:18.449653 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 00:54:18.449659 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-05 00:54:18.449665 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 00:54:18.449678 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-05 00:54:18.449684 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 00:54:18.449690 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-05 00:54:18.449698 | orchestrator | 
2026-02-05 00:54:18.449709 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-05 00:54:18.450956 | orchestrator | Thursday 05 February 2026 00:48:23 +0000 (0:00:03.224) 0:00:07.707 *****
2026-02-05 00:54:18.450989 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-05 00:54:18.450999 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-05 00:54:18.451009 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-05 00:54:18.451019 | orchestrator | 
2026-02-05 00:54:18.451028 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-05 00:54:18.451038 | orchestrator | Thursday 05 February 2026 00:48:24 +0000 (0:00:00.971) 0:00:08.679 *****
2026-02-05 00:54:18.451056 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-05 00:54:18.451067 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-05 00:54:18.451076 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-05 00:54:18.451083 | orchestrator | 
2026-02-05 00:54:18.451444 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-05 00:54:18.451459 | orchestrator | Thursday 05 February 2026 00:48:26 +0000 (0:00:01.695) 0:00:10.374 *****
2026-02-05 00:54:18.451469 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-05 00:54:18.451616 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.451651 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-05 00:54:18.451662 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.451672 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-05 00:54:18.451681 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.451690 | orchestrator | 
2026-02-05 00:54:18.451700 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-05 00:54:18.451706 | orchestrator | Thursday 05 February 2026 00:48:27 +0000 (0:00:00.984) 0:00:11.359 *****
2026-02-05 00:54:18.451715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.451767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.451775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.451782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 00:54:18.451801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 00:54:18.451820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 00:54:18.451828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 00:54:18.451835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 00:54:18.451841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 00:54:18.451847 | orchestrator | 
2026-02-05 00:54:18.451853 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-05 00:54:18.451859 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:02.682) 0:00:14.041 *****
2026-02-05 00:54:18.451865 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.451871 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.451877 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.451897 | orchestrator | 
2026-02-05 00:54:18.451903 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config 
subdirectories exist] ****
2026-02-05 00:54:18.451909 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:01.539) 0:00:15.586 *****
2026-02-05 00:54:18.451920 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-05 00:54:18.451931 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-05 00:54:18.451938 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-05 00:54:18.451944 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-05 00:54:18.451949 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-05 00:54:18.451955 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-05 00:54:18.451961 | orchestrator | 
2026-02-05 00:54:18.451972 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-05 00:54:18.451980 | orchestrator | Thursday 05 February 2026 00:48:35 +0000 (0:00:04.317) 0:00:19.903 *****
2026-02-05 00:54:18.451990 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.452000 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.452009 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.452018 | orchestrator | 
2026-02-05 00:54:18.452027 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-05 00:54:18.452035 | orchestrator | Thursday 05 February 2026 00:48:37 +0000 (0:00:01.489) 0:00:21.393 *****
2026-02-05 00:54:18.452041 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.452047 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.452053 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.452060 | orchestrator | 
2026-02-05 00:54:18.452067 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-05 00:54:18.452074 | orchestrator | Thursday 05 February 2026 00:48:39 +0000 (0:00:02.111) 0:00:23.504 *****
2026-02-05 00:54:18.452081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.452100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 00:54:18.452108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 00:54:18.452117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 00:54:18.452129 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.452136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.452143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 00:54:18.452150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 00:54:18.452160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.452175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 00:54:18.452182 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.452190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-05 00:54:18.452202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-05 00:54:18.452209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-05 00:54:18.452216 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.452223 | orchestrator | 
2026-02-05 00:54:18.452230 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-02-05 00:54:18.452237 | orchestrator | Thursday 05 February 2026 00:48:40 +0000 (0:00:01.283) 0:00:24.788 *****
2026-02-05 00:54:18.452244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.452255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-05 00:54:18.452267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.452293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d', 
'__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:54:18.452301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.452317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d', 
'__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:54:18.452329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.452351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d', 
'__omit_place_holder__1ad12bf622d2addce3c6eafcf5744b74a319ca0d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-05 00:54:18.452358 | orchestrator | 2026-02-05 00:54:18.452365 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-05 00:54:18.452372 | orchestrator | Thursday 05 February 2026 00:48:43 +0000 (0:00:02.842) 0:00:27.631 ***** 2026-02-05 00:54:18.452379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.452434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.452440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.452446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.452452 | orchestrator | 2026-02-05 00:54:18.452458 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-05 00:54:18.452466 | orchestrator | Thursday 05 February 2026 00:48:46 +0000 (0:00:02.746) 0:00:30.377 ***** 2026-02-05 00:54:18.452628 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-05 00:54:18.452639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-05 00:54:18.452645 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-05 00:54:18.452651 | orchestrator | 2026-02-05 00:54:18.452657 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-05 00:54:18.452670 | orchestrator | Thursday 05 February 2026 00:48:49 +0000 (0:00:03.084) 0:00:33.462 ***** 2026-02-05 00:54:18.452676 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-05 00:54:18.452682 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-05 00:54:18.452694 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-05 00:54:18.452699 | orchestrator | 2026-02-05 00:54:18.452710 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-05 00:54:18.452717 | orchestrator | Thursday 05 February 2026 00:48:54 +0000 (0:00:05.261) 0:00:38.724 ***** 2026-02-05 00:54:18.452723 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.452784 
| orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.452791 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.452797 | orchestrator | 2026-02-05 00:54:18.452802 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-05 00:54:18.452808 | orchestrator | Thursday 05 February 2026 00:48:55 +0000 (0:00:00.791) 0:00:39.516 ***** 2026-02-05 00:54:18.452814 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-05 00:54:18.452821 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-05 00:54:18.452827 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-05 00:54:18.452833 | orchestrator | 2026-02-05 00:54:18.452838 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-05 00:54:18.452844 | orchestrator | Thursday 05 February 2026 00:48:57 +0000 (0:00:02.312) 0:00:41.829 ***** 2026-02-05 00:54:18.452850 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-05 00:54:18.452856 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-05 00:54:18.452862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-05 00:54:18.452868 | orchestrator | 2026-02-05 00:54:18.452873 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-05 00:54:18.452879 | orchestrator | Thursday 05 February 2026 00:49:00 +0000 (0:00:02.874) 0:00:44.704 ***** 2026-02-05 00:54:18.452885 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2026-02-05 00:54:18.452891 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-05 00:54:18.452897 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-05 00:54:18.452903 | orchestrator | 2026-02-05 00:54:18.452908 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-05 00:54:18.452914 | orchestrator | Thursday 05 February 2026 00:49:02 +0000 (0:00:01.847) 0:00:46.552 ***** 2026-02-05 00:54:18.452920 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-05 00:54:18.452926 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-05 00:54:18.452941 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-05 00:54:18.452947 | orchestrator | 2026-02-05 00:54:18.452960 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-05 00:54:18.452966 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:02.042) 0:00:48.594 ***** 2026-02-05 00:54:18.452972 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.452978 | orchestrator | 2026-02-05 00:54:18.452983 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-05 00:54:18.452989 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:01.118) 0:00:49.712 ***** 2026-02-05 00:54:18.452995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.453010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.453021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.453027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.453034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.453040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.453046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.453056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.453065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.453071 | orchestrator | 2026-02-05 00:54:18.453077 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-05 00:54:18.453083 | orchestrator | Thursday 05 February 2026 00:49:09 +0000 (0:00:04.017) 0:00:53.730 ***** 2026-02-05 00:54:18.453094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453112 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.453118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453167 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.453176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453220 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.453226 | orchestrator | 2026-02-05 00:54:18.453232 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-05 00:54:18.453238 | orchestrator | Thursday 05 February 2026 00:49:10 +0000 (0:00:01.043) 0:00:54.773 ***** 2026-02-05 00:54:18.453244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453287 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.453331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453339 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.453345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453368 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.453373 | orchestrator | 2026-02-05 00:54:18.453379 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 00:54:18.453385 | orchestrator | Thursday 05 February 2026 00:49:11 +0000 (0:00:00.725) 0:00:55.499 ***** 2026-02-05 00:54:18.453391 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453545 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.453555 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453585 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.453594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453634 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.453643 | orchestrator | 2026-02-05 00:54:18.453652 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2026-02-05 00:54:18.453662 | orchestrator | Thursday 05 February 2026 00:49:12 +0000 (0:00:00.999) 0:00:56.498 ***** 2026-02-05 00:54:18.453672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-02-05 00:54:18.453709 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.453718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453784 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.453801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453828 | orchestrator | skipping: [testbed-node-2] 
2026-02-05 00:54:18.453834 | orchestrator | 2026-02-05 00:54:18.453840 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-05 00:54:18.453845 | orchestrator | Thursday 05 February 2026 00:49:13 +0000 (0:00:00.720) 0:00:57.219 ***** 2026-02-05 00:54:18.453851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453870 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.453885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453928 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.453938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.453948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.453955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.453961 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.453968 | orchestrator | 2026-02-05 00:54:18.454227 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-05 00:54:18.454235 | orchestrator | Thursday 05 February 2026 00:49:14 +0000 (0:00:01.104) 0:00:58.324 ***** 2026-02-05 00:54:18.454243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454355 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.454371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454402 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.454412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454465 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.454475 | orchestrator | 2026-02-05 00:54:18.454486 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-05 00:54:18.454496 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:01.951) 0:01:00.276 ***** 2026-02-05 00:54:18.454507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454531 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454542 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.454553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454598 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454610 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.454621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454702 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.454711 | orchestrator | 2026-02-05 00:54:18.454721 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-05 00:54:18.454793 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:00.495) 0:01:00.771 ***** 2026-02-05 00:54:18.454848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454891 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.454910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454933 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.454944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-05 00:54:18.454954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-05 00:54:18.454964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-05 00:54:18.454981 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.454993 | orchestrator | 2026-02-05 00:54:18.455007 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-05 00:54:18.455017 | orchestrator | Thursday 05 February 2026 00:49:17 +0000 (0:00:00.776) 0:01:01.548 ***** 2026-02-05 00:54:18.455024 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 00:54:18.455030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 00:54:18.455040 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-05 00:54:18.455045 | orchestrator | 2026-02-05 00:54:18.455051 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-05 00:54:18.455056 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:01.858) 0:01:03.406 ***** 2026-02-05 00:54:18.455062 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 00:54:18.455067 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 00:54:18.455073 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-05 00:54:18.455079 | orchestrator | 2026-02-05 00:54:18.455084 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-05 00:54:18.455090 | orchestrator | Thursday 05 February 2026 00:49:20 +0000 (0:00:01.359) 0:01:04.766 ***** 2026-02-05 00:54:18.455095 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 00:54:18.455101 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 00:54:18.455106 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 00:54:18.455112 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 00:54:18.455117 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.455123 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 00:54:18.455128 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.455133 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 00:54:18.455139 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.455144 | orchestrator | 2026-02-05 00:54:18.455150 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-05 00:54:18.455155 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.787) 0:01:05.553 ***** 2026-02-05 00:54:18.455161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.455167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.455177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-05 00:54:18.455190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.455197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.455203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-05 00:54:18.455208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.455215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.455224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-05 00:54:18.455230 | orchestrator | 2026-02-05 00:54:18.455236 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-05 00:54:18.455241 | orchestrator | Thursday 05 February 2026 00:49:23 +0000 (0:00:02.161) 0:01:07.715 ***** 2026-02-05 00:54:18.455250 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.455259 | orchestrator | 2026-02-05 00:54:18.455268 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-05 00:54:18.455277 | orchestrator | Thursday 05 
February 2026 00:49:24 +0000 (0:00:00.725) 0:01:08.440 ***** 2026-02-05 00:54:18.455290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 00:54:18.455303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.455310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 00:54:18.455334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.455343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-05 00:54:18.455366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.455372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455388 | orchestrator | 2026-02-05 00:54:18.455398 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-05 00:54:18.455406 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:03.681) 0:01:12.122 ***** 2026-02-05 00:54:18.455419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 00:54:18.455436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2026-02-05 00:54:18.455445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455466 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.455472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 00:54:18.455477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.455483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2026-02-05 00:54:18.455504 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.455519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-05 00:54:18.455528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.455544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455562 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.455568 | orchestrator | 2026-02-05 00:54:18.455573 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-05 00:54:18.455579 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:00.830) 0:01:12.952 ***** 2026-02-05 00:54:18.455589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:54:18.455599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:54:18.455612 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.455625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:54:18.455633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:54:18.455641 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.455654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:54:18.455664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-05 00:54:18.455673 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.455682 | orchestrator | 2026-02-05 00:54:18.455697 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-05 00:54:18.455705 | orchestrator | Thursday 05 February 2026 00:49:29 +0000 (0:00:01.035) 0:01:13.988 ***** 2026-02-05 00:54:18.455711 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.455716 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.455722 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.455749 | orchestrator | 2026-02-05 00:54:18.455755 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-05 00:54:18.455761 | orchestrator | Thursday 05 February 2026 00:49:31 +0000 (0:00:01.334) 0:01:15.322 ***** 2026-02-05 00:54:18.455766 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.455772 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.455783 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.455789 | orchestrator | 2026-02-05 00:54:18.455794 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-05 00:54:18.455800 | orchestrator | Thursday 05 February 2026 
00:49:33 +0000 (0:00:01.970) 0:01:17.293 ***** 2026-02-05 00:54:18.455805 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.455811 | orchestrator | 2026-02-05 00:54:18.455818 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-05 00:54:18.455827 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:00.789) 0:01:18.082 ***** 2026-02-05 00:54:18.455837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.455848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.455872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.455915 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455926 | orchestrator | 2026-02-05 00:54:18.455932 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-05 00:54:18.455938 | orchestrator | Thursday 05 February 2026 00:49:37 +0000 (0:00:03.365) 0:01:21.447 ***** 2026-02-05 00:54:18.455952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.455962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455974 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.455979 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.455985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.455994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.456004 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.456014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.456020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.456026 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.456031 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.456037 | orchestrator | 2026-02-05 00:54:18.456043 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-05 00:54:18.456048 | orchestrator | Thursday 05 February 2026 00:49:38 +0000 (0:00:00.596) 0:01:22.044 ***** 2026-02-05 00:54:18.456055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:54:18.456061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:54:18.456067 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.456073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:54:18.456081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2026-02-05 00:54:18.456090 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.456099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:54:18.456108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-05 00:54:18.456126 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.456134 | orchestrator | 2026-02-05 00:54:18.456143 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-05 00:54:18.456151 | orchestrator | Thursday 05 February 2026 00:49:38 +0000 (0:00:00.865) 0:01:22.909 ***** 2026-02-05 00:54:18.456160 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.456166 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.456172 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.456177 | orchestrator | 2026-02-05 00:54:18.456183 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-05 00:54:18.456188 | orchestrator | Thursday 05 February 2026 00:49:40 +0000 (0:00:02.084) 0:01:24.993 ***** 2026-02-05 00:54:18.456194 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.456200 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.456206 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.456212 | orchestrator | 2026-02-05 00:54:18.456225 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-05 00:54:18.456235 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:01.889) 0:01:26.882 ***** 2026-02-05 00:54:18.456244 | orchestrator | 
skipping: [testbed-node-0] 2026-02-05 00:54:18.456254 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.456263 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.456272 | orchestrator | 2026-02-05 00:54:18.456280 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-05 00:54:18.456290 | orchestrator | Thursday 05 February 2026 00:49:43 +0000 (0:00:00.268) 0:01:27.151 ***** 2026-02-05 00:54:18.456298 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.456308 | orchestrator | 2026-02-05 00:54:18.456317 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-05 00:54:18.456326 | orchestrator | Thursday 05 February 2026 00:49:43 +0000 (0:00:00.632) 0:01:27.783 ***** 2026-02-05 00:54:18.456336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 00:54:18.456344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 00:54:18.456354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-02-05 00:54:18.456370 | orchestrator | 2026-02-05 00:54:18.456378 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-05 00:54:18.456387 | orchestrator | Thursday 05 February 2026 00:49:46 +0000 (0:00:02.615) 0:01:30.399 ***** 2026-02-05 00:54:18.456406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 00:54:18.456417 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.456426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 00:54:18.456436 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.456446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-02-05 00:54:18.456455 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.456463 | orchestrator | 2026-02-05 00:54:18.456474 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-05 00:54:18.456482 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:01.370) 0:01:31.769 ***** 2026-02-05 00:54:18.456493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:54:18.456511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:54:18.456523 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.456532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:54:18.456544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:54:18.456555 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.456571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:54:18.456581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-02-05 00:54:18.456592 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.456602 | orchestrator | 2026-02-05 00:54:18.456610 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2026-02-05 00:54:18.456620 | orchestrator | Thursday 05 February 2026 00:49:49 +0000 (0:00:01.938) 0:01:33.708 *****
2026-02-05 00:54:18.456629 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.456639 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.456648 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.456658 | orchestrator |
2026-02-05 00:54:18.456667 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-02-05 00:54:18.456677 | orchestrator | Thursday 05 February 2026 00:49:50 +0000 (0:00:00.575) 0:01:34.283 *****
2026-02-05 00:54:18.456686 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.456695 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.456705 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.456711 | orchestrator |
2026-02-05 00:54:18.456717 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-02-05 00:54:18.456722 | orchestrator | Thursday 05 February 2026 00:49:51 +0000 (0:00:01.100) 0:01:35.384 *****
2026-02-05 00:54:18.456747 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:54:18.456758 | orchestrator |
2026-02-05 00:54:18.456763 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-02-05 00:54:18.456769 | orchestrator | Thursday 05 February 2026 00:49:52 +0000 (0:00:00.711) 0:01:36.096 *****
2026-02-05 00:54:18.456775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 00:54:18.456781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 00:54:18.456802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 00:54:18.456825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456888 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456898 | orchestrator |
2026-02-05 00:54:18.456907 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-05 00:54:18.456916 | orchestrator | Thursday 05 February 2026 00:49:55 +0000 (0:00:03.571) 0:01:39.667 *****
2026-02-05 00:54:18.456926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 00:54:18.456936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.456976 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.456987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 00:54:18.456997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457093 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.457111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-05 00:54:18.457128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457157 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.457167 | orchestrator |
2026-02-05 00:54:18.457172 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-05 00:54:18.457180 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:01.259) 0:01:40.926 *****
2026-02-05 00:54:18.457189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-05 00:54:18.457199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-05 00:54:18.457209 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.457218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-05 00:54:18.457229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-05 00:54:18.457235 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.457240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-05 00:54:18.457250 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-05 00:54:18.457265 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.457274 | orchestrator |
2026-02-05 00:54:18.457283 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-05 00:54:18.457291 | orchestrator | Thursday 05 February 2026 00:49:57 +0000 (0:00:00.840) 0:01:41.767 *****
2026-02-05 00:54:18.457299 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.457307 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.457316 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.457325 | orchestrator |
2026-02-05 00:54:18.457333 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-05 00:54:18.457342 | orchestrator | Thursday 05 February 2026 00:49:58 +0000 (0:00:01.191) 0:01:42.958 *****
2026-02-05 00:54:18.457352 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.457362 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.457369 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.457375 | orchestrator |
2026-02-05 00:54:18.457380 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-05 00:54:18.457386 | orchestrator | Thursday 05 February 2026 00:50:00 +0000 (0:00:01.898) 0:01:44.857 *****
2026-02-05 00:54:18.457391 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.457397 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.457402 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.457408 | orchestrator |
2026-02-05 00:54:18.457413 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-05 00:54:18.457419 | orchestrator | Thursday 05 February 2026 00:50:01 +0000 (0:00:00.294) 0:01:45.152 *****
2026-02-05 00:54:18.457424 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.457430 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.457435 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.457440 | orchestrator |
2026-02-05 00:54:18.457446 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-05 00:54:18.457451 | orchestrator | Thursday 05 February 2026 00:50:01 +0000 (0:00:00.450) 0:01:45.602 *****
2026-02-05 00:54:18.457457 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:54:18.457462 | orchestrator |
2026-02-05 00:54:18.457468 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-05 00:54:18.457473 | orchestrator | Thursday 05 February 2026 00:50:02 +0000 (0:00:00.813) 0:01:46.415 *****
2026-02-05 00:54:18.457479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 00:54:18.457485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value':
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 00:54:18.457515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 00:54:18.457521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 00:54:18.457539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 00:54:18.457632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 00:54:18.457643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-05 00:54:18.457675 | orchestrator |
2026-02-05 00:54:18.457681 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-05 00:54:18.457687 | orchestrator | Thursday 05 February 2026 00:50:07 +0000 (0:00:05.068) 0:01:51.484 *****
2026-02-05 00:54:18.457696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 00:54:18.457705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:54:18.457712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457779 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.457802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 00:54:18.457813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:54:18.457823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457869 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.457879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 00:54:18.457886 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 00:54:18.457891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.457932 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.457937 | orchestrator | 2026-02-05 00:54:18.457942 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 
2026-02-05 00:54:18.457948 | orchestrator | Thursday 05 February 2026 00:50:08 +0000 (0:00:01.294) 0:01:52.778 *****
2026-02-05 00:54:18.457954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-05 00:54:18.457960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-05 00:54:18.457967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-05 00:54:18.457973 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.457978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-05 00:54:18.457983 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.457989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-05 00:54:18.457994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-05 00:54:18.458004 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.458009 | orchestrator |
2026-02-05 00:54:18.458044 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-05 00:54:18.458050 | orchestrator | Thursday 05 February 2026 00:50:10 +0000 (0:00:01.263) 0:01:54.042 *****
2026-02-05 00:54:18.458055 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.458061 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.458067 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.458072 | orchestrator |
2026-02-05 00:54:18.458078 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-05 00:54:18.458083 | orchestrator | Thursday 05 February 2026 00:50:11 +0000 (0:00:01.341) 0:01:55.383 *****
2026-02-05 00:54:18.458089 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.458094 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.458100 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.458105 | orchestrator |
2026-02-05 00:54:18.458111 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-05 00:54:18.458116 | orchestrator | Thursday 05 February 2026 00:50:13 +0000 (0:00:01.788) 0:01:57.172 *****
2026-02-05 00:54:18.458122 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.458127 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.458133 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.458138 | orchestrator |
2026-02-05 00:54:18.458144 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-05 00:54:18.458149 | orchestrator | Thursday 05 February 2026 00:50:13 +0000 (0:00:00.741) 0:01:57.572 *****
2026-02-05 00:54:18.458155 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:54:18.458160 | orchestrator |
2026-02-05 00:54:18.458165 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-05 00:54:18.458171 | orchestrator | Thursday 05 February 2026 00:50:14 +0000 (0:00:00.741) 0:01:58.314 *****
2026-02-05 00:54:18.458193 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 00:54:18.458201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.458215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 00:54:18.458226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.458237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 00:54:18.458250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.458261 | orchestrator | 2026-02-05 00:54:18.458267 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-05 00:54:18.458273 | orchestrator | Thursday 05 February 2026 00:50:18 +0000 (0:00:03.883) 0:02:02.197 ***** 2026-02-05 00:54:18.458279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 00:54:18.458292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.458303 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.458309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 00:54:18.458322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.458335 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.458341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 00:54:18.458353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.458360 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.458365 | orchestrator | 2026-02-05 00:54:18.458371 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-05 00:54:18.458377 | orchestrator | Thursday 05 February 2026 00:50:22 +0000 (0:00:04.038) 0:02:06.236 ***** 2026-02-05 00:54:18.458387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:54:18.458393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:54:18.458399 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.458404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:54:18.458410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:54:18.458416 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.458422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:54:18.458428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-05 00:54:18.458433 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.458439 | orchestrator | 2026-02-05 00:54:18.458444 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-05 00:54:18.458465 | orchestrator | Thursday 05 February 2026 00:50:25 +0000 (0:00:03.457) 0:02:09.694 ***** 2026-02-05 00:54:18.458471 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.458476 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.458485 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.458491 | orchestrator | 2026-02-05 00:54:18.458496 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-05 00:54:18.458505 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:01.347) 0:02:11.041 ***** 2026-02-05 00:54:18.458511 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.458517 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.458522 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.458528 | orchestrator | 2026-02-05 00:54:18.458533 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-05 00:54:18.458542 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:01.843) 0:02:12.884 ***** 2026-02-05 00:54:18.458548 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.458553 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.458559 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.458565 | orchestrator | 2026-02-05 00:54:18.458570 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-05 00:54:18.458576 | orchestrator | Thursday 05 February 2026 00:50:29 +0000 (0:00:00.258) 0:02:13.143 ***** 2026-02-05 00:54:18.458581 | 
orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.458587 | orchestrator | 2026-02-05 00:54:18.458592 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-05 00:54:18.458598 | orchestrator | Thursday 05 February 2026 00:50:29 +0000 (0:00:00.875) 0:02:14.019 ***** 2026-02-05 00:54:18.458604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 00:54:18.458610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 00:54:18.458616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 00:54:18.458622 | orchestrator | 2026-02-05 00:54:18.458627 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-05 00:54:18.458633 | orchestrator | Thursday 05 February 2026 00:50:33 +0000 (0:00:03.599) 0:02:17.619 ***** 2026-02-05 00:54:18.458639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 00:54:18.458648 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.458659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 00:54:18.458665 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.458671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 00:54:18.458677 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.458682 | orchestrator | 2026-02-05 00:54:18.458688 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-05 00:54:18.458693 | orchestrator | Thursday 05 February 2026 00:50:34 +0000 (0:00:00.453) 0:02:18.072 ***** 2026-02-05 00:54:18.458699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:54:18.458705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:54:18.458710 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.458716 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:54:18.458721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:54:18.458885 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.458896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:54:18.458901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-05 00:54:18.458906 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.458911 | orchestrator | 2026-02-05 00:54:18.458916 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-05 00:54:18.458921 | orchestrator | Thursday 05 February 2026 00:50:34 +0000 (0:00:00.863) 0:02:18.936 ***** 2026-02-05 00:54:18.458926 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.458938 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.458943 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.458947 | orchestrator | 2026-02-05 00:54:18.458952 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-05 00:54:18.458957 | orchestrator | Thursday 05 February 2026 00:50:36 +0000 (0:00:01.475) 0:02:20.411 ***** 2026-02-05 00:54:18.458962 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.458967 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.458971 | orchestrator | 
changed: [testbed-node-2] 2026-02-05 00:54:18.458976 | orchestrator | 2026-02-05 00:54:18.458981 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-05 00:54:18.458986 | orchestrator | Thursday 05 February 2026 00:50:38 +0000 (0:00:02.071) 0:02:22.482 ***** 2026-02-05 00:54:18.458991 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.458995 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459000 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459005 | orchestrator | 2026-02-05 00:54:18.459010 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-05 00:54:18.459015 | orchestrator | Thursday 05 February 2026 00:50:38 +0000 (0:00:00.286) 0:02:22.769 ***** 2026-02-05 00:54:18.459019 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.459024 | orchestrator | 2026-02-05 00:54:18.459029 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-05 00:54:18.459034 | orchestrator | Thursday 05 February 2026 00:50:39 +0000 (0:00:00.963) 0:02:23.733 ***** 2026-02-05 00:54:18.459051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:54:18.459062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:54:18.459077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:54:18.459086 | orchestrator | 2026-02-05 00:54:18.459091 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external 
frontend] *** 2026-02-05 00:54:18.459096 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:03.440) 0:02:27.174 ***** 2026-02-05 00:54:18.459109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:54:18.459115 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:54:18.459129 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:54:18.459147 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459152 | orchestrator | 2026-02-05 00:54:18.459157 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-05 00:54:18.459162 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:00.711) 0:02:27.885 ***** 2026-02-05 00:54:18.459168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:54:18.459174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:54:18.459181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:54:18.459192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:54:18.459197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:54:18.459202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:54:18.459207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 00:54:18.459213 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:54:18.459223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:54:18.459228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 00:54:18.459233 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:54:18.459247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:54:18.459253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-05 00:54:18.459258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-05 00:54:18.459263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-05 00:54:18.459271 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459276 | orchestrator | 2026-02-05 00:54:18.459281 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-05 00:54:18.459286 | orchestrator | Thursday 05 February 2026 00:50:45 +0000 (0:00:01.904) 0:02:29.790 ***** 2026-02-05 00:54:18.459291 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.459296 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.459300 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.459305 | orchestrator | 2026-02-05 00:54:18.459310 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-05 00:54:18.459315 | orchestrator | Thursday 05 February 2026 00:50:46 +0000 (0:00:01.232) 0:02:31.022 ***** 2026-02-05 00:54:18.459320 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.459325 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.459329 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.459334 | orchestrator | 2026-02-05 00:54:18.459339 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-05 00:54:18.459344 | orchestrator | Thursday 05 February 2026 00:50:49 +0000 (0:00:02.045) 0:02:33.068 ***** 2026-02-05 00:54:18.459349 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459353 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459358 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459363 | orchestrator | 2026-02-05 
00:54:18.459368 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-05 00:54:18.459373 | orchestrator | Thursday 05 February 2026 00:50:49 +0000 (0:00:00.292) 0:02:33.360 ***** 2026-02-05 00:54:18.459377 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459382 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459387 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459392 | orchestrator | 2026-02-05 00:54:18.459397 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-05 00:54:18.459401 | orchestrator | Thursday 05 February 2026 00:50:49 +0000 (0:00:00.325) 0:02:33.686 ***** 2026-02-05 00:54:18.459406 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.459411 | orchestrator | 2026-02-05 00:54:18.459416 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-05 00:54:18.459421 | orchestrator | Thursday 05 February 2026 00:50:50 +0000 (0:00:01.190) 0:02:34.876 ***** 2026-02-05 00:54:18.459456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:54:18.459472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:54:18.459482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:54:18.459487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:54:18.459493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:54:18.459498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:54:18.459506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 00:54:18.459519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:54:18.459525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:54:18.459530 | orchestrator | 2026-02-05 00:54:18.459535 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-05 00:54:18.459540 | orchestrator | Thursday 05 February 2026 00:50:54 +0000 (0:00:03.944) 0:02:38.821 ***** 2026-02-05 00:54:18.459545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:54:18.459551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:54:18.459560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:54:18.459572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:54:18.459578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 00:54:18.459584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:54:18.459589 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:54:18.459599 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 00:54:18.459612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 00:54:18.459650 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459655 | orchestrator | 2026-02-05 00:54:18.459660 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-05 00:54:18.459809 | orchestrator | Thursday 05 February 2026 00:50:55 +0000 (0:00:00.550) 0:02:39.372 ***** 
2026-02-05 00:54:18.459819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:54:18.459825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:54:18.459830 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:54:18.459840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:54:18.459845 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:54:18.459855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-05 00:54:18.459860 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 00:54:18.459865 | orchestrator | 2026-02-05 00:54:18.459870 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-05 00:54:18.459875 | orchestrator | Thursday 05 February 2026 00:50:56 +0000 (0:00:00.955) 0:02:40.328 ***** 2026-02-05 00:54:18.459880 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.459885 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.459890 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.459894 | orchestrator | 2026-02-05 00:54:18.459899 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-05 00:54:18.459904 | orchestrator | Thursday 05 February 2026 00:50:57 +0000 (0:00:01.302) 0:02:41.630 ***** 2026-02-05 00:54:18.459909 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.459914 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.459919 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.459923 | orchestrator | 2026-02-05 00:54:18.459928 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-05 00:54:18.459933 | orchestrator | Thursday 05 February 2026 00:50:59 +0000 (0:00:01.895) 0:02:43.525 ***** 2026-02-05 00:54:18.459938 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.459943 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.459947 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.459958 | orchestrator | 2026-02-05 00:54:18.459966 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-05 00:54:18.459973 | orchestrator | Thursday 05 February 2026 00:50:59 +0000 (0:00:00.292) 0:02:43.817 ***** 2026-02-05 00:54:18.459984 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.459993 | orchestrator | 2026-02-05 00:54:18.460001 | orchestrator | TASK 
[haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-05 00:54:18.460008 | orchestrator | Thursday 05 February 2026 00:51:00 +0000 (0:00:01.040) 0:02:44.858 ***** 2026-02-05 00:54:18.460023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 00:54:18.460038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 00:54:18.460055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 00:54:18.460081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460089 | orchestrator | 2026-02-05 00:54:18.460097 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-05 00:54:18.460105 | orchestrator | Thursday 05 February 2026 00:51:04 +0000 (0:00:03.401) 0:02:48.260 ***** 2026-02-05 00:54:18.460117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 00:54:18.460126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460134 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 00:54:18.460165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460171 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.460184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 00:54:18.460190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460195 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.460200 | orchestrator | 2026-02-05 00:54:18.460205 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-05 00:54:18.460210 | orchestrator | Thursday 05 February 2026 00:51:04 +0000 (0:00:00.660) 0:02:48.920 ***** 2026-02-05 00:54:18.460215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:54:18.460221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:54:18.460226 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
2026-02-05 00:54:18.460236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:54:18.460244 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.460249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:54:18.460255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-05 00:54:18.460259 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.460265 | orchestrator | 2026-02-05 00:54:18.460270 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-05 00:54:18.460275 | orchestrator | Thursday 05 February 2026 00:51:05 +0000 (0:00:00.973) 0:02:49.893 ***** 2026-02-05 00:54:18.460280 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.460285 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.460290 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.460295 | orchestrator | 2026-02-05 00:54:18.460299 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-05 00:54:18.460304 | orchestrator | Thursday 05 February 2026 00:51:07 +0000 (0:00:01.238) 0:02:51.132 ***** 2026-02-05 00:54:18.460309 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.460314 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.460319 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.460324 | orchestrator | 2026-02-05 00:54:18.460329 | orchestrator | TASK [include_role : manila] 
*************************************************** 2026-02-05 00:54:18.460334 | orchestrator | Thursday 05 February 2026 00:51:09 +0000 (0:00:01.943) 0:02:53.076 ***** 2026-02-05 00:54:18.460338 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.460343 | orchestrator | 2026-02-05 00:54:18.460348 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-05 00:54:18.460353 | orchestrator | Thursday 05 February 2026 00:51:10 +0000 (0:00:01.450) 0:02:54.526 ***** 2026-02-05 00:54:18.460361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 00:54:18.460370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 00:54:18.460395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-05 00:54:18.460417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460444 | orchestrator | 2026-02-05 00:54:18.460450 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-05 00:54:18.460456 | orchestrator | Thursday 05 February 2026 00:51:13 +0000 (0:00:03.459) 0:02:57.986 ***** 2026-02-05 00:54:18.460466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 00:54:18.460472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-05 00:54:18.460494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460500 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460533 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.460539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': 
'8786'}}}})  2026-02-05 00:54:18.460545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.460562 | orchestrator | 
skipping: [testbed-node-2] 2026-02-05 00:54:18.460568 | orchestrator | 2026-02-05 00:54:18.460574 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-05 00:54:18.460579 | orchestrator | Thursday 05 February 2026 00:51:14 +0000 (0:00:00.961) 0:02:58.948 ***** 2026-02-05 00:54:18.460585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-05 00:54:18.460593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-05 00:54:18.460608 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-05 00:54:18.460625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-05 00:54:18.460631 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.460637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-05 00:54:18.460643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-05 00:54:18.460649 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.460655 | orchestrator | 2026-02-05 00:54:18.460660 | 
orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-05 00:54:18.460666 | orchestrator | Thursday 05 February 2026 00:51:15 +0000 (0:00:00.949) 0:02:59.897 ***** 2026-02-05 00:54:18.460671 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.460677 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.460683 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.460688 | orchestrator | 2026-02-05 00:54:18.460694 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-05 00:54:18.460700 | orchestrator | Thursday 05 February 2026 00:51:17 +0000 (0:00:01.282) 0:03:01.180 ***** 2026-02-05 00:54:18.460706 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.460712 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.460717 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.460723 | orchestrator | 2026-02-05 00:54:18.460754 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-05 00:54:18.460760 | orchestrator | Thursday 05 February 2026 00:51:19 +0000 (0:00:01.864) 0:03:03.045 ***** 2026-02-05 00:54:18.460766 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.460772 | orchestrator | 2026-02-05 00:54:18.460777 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-05 00:54:18.460782 | orchestrator | Thursday 05 February 2026 00:51:19 +0000 (0:00:00.982) 0:03:04.028 ***** 2026-02-05 00:54:18.460787 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 00:54:18.460793 | orchestrator | 2026-02-05 00:54:18.460798 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-05 00:54:18.460802 | orchestrator | Thursday 05 February 2026 00:51:23 +0000 (0:00:03.091) 0:03:07.120 ***** 2026-02-05 
00:54:18.460808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:54:18.460826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 00:54:18.460832 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:54:18.460843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 00:54:18.460848 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.460860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:54:18.460871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 00:54:18.460876 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.460881 | orchestrator | 2026-02-05 00:54:18.460886 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-05 00:54:18.460891 | orchestrator | Thursday 05 February 2026 00:51:25 +0000 (0:00:02.371) 0:03:09.491 ***** 
2026-02-05 00:54:18.460896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:54:18.460905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 00:54:18.460910 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:54:18.460927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 00:54:18.460933 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.460941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:54:18.460953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-05 00:54:18.460958 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.460963 | orchestrator | 2026-02-05 00:54:18.460968 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-05 00:54:18.460973 | orchestrator | Thursday 05 February 2026 00:51:27 +0000 (0:00:01.870) 0:03:11.362 ***** 2026-02-05 
00:54:18.460978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-05 00:54:18.460984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-05 00:54:18.460989 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.460994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-05 00:54:18.461003 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-05 00:54:18.461008 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-05 00:54:18.461023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-05 00:54:18.461029 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461034 
| orchestrator | 2026-02-05 00:54:18.461039 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-05 00:54:18.461044 | orchestrator | Thursday 05 February 2026 00:51:29 +0000 (0:00:02.222) 0:03:13.585 ***** 2026-02-05 00:54:18.461048 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.461053 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.461058 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.461063 | orchestrator | 2026-02-05 00:54:18.461068 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-05 00:54:18.461073 | orchestrator | Thursday 05 February 2026 00:51:31 +0000 (0:00:02.240) 0:03:15.825 ***** 2026-02-05 00:54:18.461078 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461083 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461087 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461092 | orchestrator | 2026-02-05 00:54:18.461097 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-05 00:54:18.461102 | orchestrator | Thursday 05 February 2026 00:51:33 +0000 (0:00:01.258) 0:03:17.084 ***** 2026-02-05 00:54:18.461107 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461112 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461117 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461122 | orchestrator | 2026-02-05 00:54:18.461127 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-05 00:54:18.461132 | orchestrator | Thursday 05 February 2026 00:51:33 +0000 (0:00:00.543) 0:03:17.627 ***** 2026-02-05 00:54:18.461137 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.461142 | orchestrator | 2026-02-05 00:54:18.461147 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2026-02-05 00:54:18.461151 | orchestrator | Thursday 05 February 2026 00:51:34 +0000 (0:00:01.085) 0:03:18.713 ***** 2026-02-05 00:54:18.461156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-05 00:54:18.461166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-05 00:54:18.461171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-05 00:54:18.461176 | orchestrator | 2026-02-05 00:54:18.461181 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-05 00:54:18.461188 | orchestrator | Thursday 05 February 2026 00:51:36 +0000 (0:00:01.564) 0:03:20.277 ***** 2026-02-05 00:54:18.461198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-05 00:54:18.461203 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-05 00:54:18.461216 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-05 00:54:18.461227 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461232 | orchestrator | 2026-02-05 00:54:18.461236 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-05 00:54:18.461241 | orchestrator | Thursday 05 February 2026 00:51:36 +0000 (0:00:00.603) 0:03:20.881 ***** 2026-02-05 00:54:18.461246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}})  2026-02-05 00:54:18.461252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-05 00:54:18.461257 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461262 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-05 00:54:18.461273 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461277 | orchestrator | 2026-02-05 00:54:18.461282 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-05 00:54:18.461287 | orchestrator | Thursday 05 February 2026 00:51:37 +0000 (0:00:00.532) 0:03:21.414 ***** 2026-02-05 00:54:18.461292 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461297 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461302 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461307 | orchestrator | 2026-02-05 00:54:18.461312 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-05 00:54:18.461317 | orchestrator | Thursday 05 February 2026 00:51:37 +0000 (0:00:00.357) 0:03:21.772 ***** 2026-02-05 00:54:18.461322 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461327 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461334 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461339 | orchestrator | 2026-02-05 00:54:18.461344 | orchestrator | TASK [include_role : mistral] 
************************************************** 2026-02-05 00:54:18.461349 | orchestrator | Thursday 05 February 2026 00:51:38 +0000 (0:00:01.145) 0:03:22.917 ***** 2026-02-05 00:54:18.461354 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.461359 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.461364 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.461369 | orchestrator | 2026-02-05 00:54:18.461374 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-05 00:54:18.461381 | orchestrator | Thursday 05 February 2026 00:51:39 +0000 (0:00:00.416) 0:03:23.334 ***** 2026-02-05 00:54:18.461386 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.461391 | orchestrator | 2026-02-05 00:54:18.461396 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-05 00:54:18.461405 | orchestrator | Thursday 05 February 2026 00:51:40 +0000 (0:00:01.122) 0:03:24.456 ***** 2026-02-05 00:54:18.461410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-02-05 00:54:18.461415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 00:54:18.461421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:54:18.461553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:54:18.461581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.461630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 00:54:18.461646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.461654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.461717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.461748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:54:18.461753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.461777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.461827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.461883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461892 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.461908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.461916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-05 00:54:18.461960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-05 00:54:18.461970 | orchestrator |
2026-02-05 00:54:18.461978 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-05 00:54:18.461986 | orchestrator | Thursday 05 February 2026 00:51:44 +0000 (0:00:04.203) 0:03:28.660 *****
2026-02-05 00:54:18.461992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server',
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 00:54:18.461997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:54:18.462060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.462093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 00:54:18.462121 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 
00:54:18.462139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:54:18.462181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.462188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.462205 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 00:54:18.462215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 00:54:18.462249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.462260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': 
False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-05 00:54:18.462312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.462332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.462347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462352 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.462358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-02-05 00:54:18.462376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-05 00:54:18.462395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-05 00:54:18.462412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-05 00:54:18.462418 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.462423 | orchestrator | 2026-02-05 00:54:18.462428 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-05 00:54:18.462434 | orchestrator | Thursday 05 February 2026 00:51:46 +0000 (0:00:01.529) 0:03:30.190 ***** 2026-02-05 00:54:18.462440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:54:18.462445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:54:18.462451 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.462457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:54:18.462462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:54:18.462471 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.462476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:54:18.462504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-05 00:54:18.462511 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.462516 | orchestrator | 2026-02-05 00:54:18.462521 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-05 00:54:18.462526 | orchestrator | Thursday 05 February 2026 00:51:47 +0000 (0:00:01.343) 0:03:31.533 ***** 2026-02-05 00:54:18.462531 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.462537 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.462542 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.462547 | orchestrator | 2026-02-05 00:54:18.462552 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-05 00:54:18.462557 | orchestrator | Thursday 05 February 2026 00:51:48 +0000 (0:00:01.362) 0:03:32.896 ***** 2026-02-05 00:54:18.462562 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.462567 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.462572 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.462577 | orchestrator | 2026-02-05 00:54:18.462583 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-05 00:54:18.462588 | orchestrator | Thursday 05 February 2026 00:51:50 +0000 (0:00:01.949) 0:03:34.845 ***** 2026-02-05 00:54:18.462593 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.462598 | orchestrator | 2026-02-05 00:54:18.462603 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-05 00:54:18.462608 | orchestrator | Thursday 05 February 2026 00:51:52 +0000 (0:00:01.277) 0:03:36.122 ***** 2026-02-05 00:54:18.462614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.462626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.462636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.462642 | orchestrator | 2026-02-05 00:54:18.462647 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-05 00:54:18.462652 | orchestrator | Thursday 05 February 2026 00:51:55 +0000 (0:00:03.287) 0:03:39.410 ***** 2026-02-05 00:54:18.462658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.462663 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
00:54:18.462669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.462674 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.462691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.462700 | orchestrator | skipping: [testbed-node-2] 
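Note on the items above: each service entry in the log carries a `haproxy` map with paired internal and external listeners (`placement_api` / `placement_api_external`), distinguished by the `external` flag and an `external_fqdn`. As a minimal sketch (not the kolla-ansible implementation; the function name is illustrative only), the split the role performs over these dicts can be modeled like this:

```python
# Hedged sketch: partition the per-service 'haproxy' listener dicts seen in
# the log into internal and external frontend groups. Keys mirror the logged
# items ('enabled', 'external', 'external_fqdn', 'tls_backend'); the helper
# name split_frontends is an assumption for illustration.
service_haproxy = {
    "placement_api": {
        "enabled": True, "mode": "http", "external": False,
        "port": "8780", "listen_port": "8780", "tls_backend": "no",
    },
    "placement_api_external": {
        "enabled": True, "mode": "http", "external": True,
        "external_fqdn": "api.testbed.osism.xyz",
        "port": "8780", "listen_port": "8780", "tls_backend": "no",
    },
}

def split_frontends(haproxy_map):
    """Partition enabled listeners into internal and external groups."""
    internal, external = {}, {}
    for name, cfg in haproxy_map.items():
        if cfg.get("enabled") not in (True, "yes"):
            continue  # disabled listeners (e.g. 'enabled': 'no') are dropped
        (external if cfg.get("external") else internal)[name] = cfg
    return internal, external

internal, external = split_frontends(service_haproxy)
```

This also explains the skips above: on nodes where a listener group does not apply, the role's conditionals evaluate false per item, so Ansible reports `skipping` for each dict rather than rendering a frontend.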
2026-02-05 00:54:18.462705 | orchestrator | 2026-02-05 00:54:18.462710 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-05 00:54:18.462716 | orchestrator | Thursday 05 February 2026 00:51:55 +0000 (0:00:00.451) 0:03:39.862 ***** 2026-02-05 00:54:18.462721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:54:18.462760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:54:18.462767 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.462772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:54:18.462777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:54:18.462782 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.462787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:54:18.462793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-05 00:54:18.462798 | orchestrator 
| skipping: [testbed-node-2] 2026-02-05 00:54:18.462803 | orchestrator | 2026-02-05 00:54:18.462808 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-05 00:54:18.462813 | orchestrator | Thursday 05 February 2026 00:51:56 +0000 (0:00:01.022) 0:03:40.885 ***** 2026-02-05 00:54:18.462818 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.462823 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.462829 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.462834 | orchestrator | 2026-02-05 00:54:18.462839 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-05 00:54:18.462844 | orchestrator | Thursday 05 February 2026 00:51:58 +0000 (0:00:01.256) 0:03:42.141 ***** 2026-02-05 00:54:18.462849 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.462855 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.462860 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.462865 | orchestrator | 2026-02-05 00:54:18.462870 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-05 00:54:18.462875 | orchestrator | Thursday 05 February 2026 00:51:59 +0000 (0:00:01.872) 0:03:44.014 ***** 2026-02-05 00:54:18.462881 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.462886 | orchestrator | 2026-02-05 00:54:18.462891 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-05 00:54:18.462896 | orchestrator | Thursday 05 February 2026 00:52:01 +0000 (0:00:01.331) 0:03:45.346 ***** 2026-02-05 00:54:18.462905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.462919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.462926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.462966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.462977 | orchestrator | 2026-02-05 00:54:18.462982 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-05 00:54:18.462988 | orchestrator | Thursday 05 February 2026 00:52:05 +0000 (0:00:03.798) 0:03:49.144 ***** 2026-02-05 00:54:18.462993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.463005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.463015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.463020 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.463032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.463037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.463046 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.463063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.463069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.463074 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463080 | orchestrator | 2026-02-05 00:54:18.463085 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-05 00:54:18.463090 | orchestrator | Thursday 05 February 2026 00:52:05 +0000 (0:00:00.575) 0:03:49.720 ***** 2026-02-05 00:54:18.463096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463102 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463119 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463152 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-05 00:54:18.463181 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463186 | orchestrator | 2026-02-05 00:54:18.463195 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-05 00:54:18.463200 | orchestrator | Thursday 05 February 2026 00:52:06 +0000 (0:00:01.003) 0:03:50.723 ***** 2026-02-05 00:54:18.463205 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.463211 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.463216 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.463221 | orchestrator | 2026-02-05 00:54:18.463226 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-05 00:54:18.463231 | orchestrator | Thursday 05 February 2026 00:52:08 +0000 (0:00:01.447) 0:03:52.170 ***** 2026-02-05 00:54:18.463237 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.463242 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.463247 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.463252 | orchestrator | 2026-02-05 
00:54:18.463258 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-05 00:54:18.463263 | orchestrator | Thursday 05 February 2026 00:52:10 +0000 (0:00:01.995) 0:03:54.165 ***** 2026-02-05 00:54:18.463268 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.463273 | orchestrator | 2026-02-05 00:54:18.463278 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-05 00:54:18.463283 | orchestrator | Thursday 05 February 2026 00:52:11 +0000 (0:00:01.217) 0:03:55.383 ***** 2026-02-05 00:54:18.463289 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-05 00:54:18.463294 | orchestrator | 2026-02-05 00:54:18.463300 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-05 00:54:18.463305 | orchestrator | Thursday 05 February 2026 00:52:12 +0000 (0:00:00.977) 0:03:56.360 ***** 2026-02-05 00:54:18.463310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 00:54:18.463319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 00:54:18.463325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-05 00:54:18.463330 | orchestrator | 2026-02-05 00:54:18.463335 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-05 00:54:18.463341 | orchestrator | Thursday 05 February 2026 00:52:15 +0000 (0:00:03.536) 0:03:59.896 ***** 2026-02-05 00:54:18.463346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463352 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463365 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463379 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463385 | orchestrator | 2026-02-05 00:54:18.463390 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-05 00:54:18.463395 | orchestrator | Thursday 05 February 2026 00:52:17 +0000 (0:00:01.293) 0:04:01.190 ***** 2026-02-05 00:54:18.463400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:54:18.463409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:54:18.463415 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:54:18.463425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:54:18.463431 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:54:18.463442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-05 00:54:18.463447 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463452 | orchestrator | 2026-02-05 00:54:18.463457 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 00:54:18.463463 | orchestrator | Thursday 05 February 2026 00:52:18 +0000 (0:00:01.705) 0:04:02.895 ***** 2026-02-05 00:54:18.463468 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.463473 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.463478 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.463483 | orchestrator | 2026-02-05 00:54:18.463488 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 00:54:18.463494 | orchestrator | Thursday 05 February 2026 00:52:21 +0000 (0:00:02.312) 0:04:05.208 ***** 2026-02-05 00:54:18.463499 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.463504 | orchestrator | 
changed: [testbed-node-0] 2026-02-05 00:54:18.463509 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.463514 | orchestrator | 2026-02-05 00:54:18.463519 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-05 00:54:18.463525 | orchestrator | Thursday 05 February 2026 00:52:24 +0000 (0:00:02.923) 0:04:08.132 ***** 2026-02-05 00:54:18.463530 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-05 00:54:18.463535 | orchestrator | 2026-02-05 00:54:18.463541 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-05 00:54:18.463546 | orchestrator | Thursday 05 February 2026 00:52:24 +0000 (0:00:00.732) 0:04:08.865 ***** 2026-02-05 00:54:18.463554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463559 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463577 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463588 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463593 | orchestrator | 2026-02-05 00:54:18.463598 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-05 00:54:18.463603 | orchestrator | Thursday 05 February 2026 00:52:26 +0000 (0:00:01.367) 0:04:10.232 ***** 2026-02-05 00:54:18.463609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463614 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463625 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-05 00:54:18.463636 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463641 | orchestrator | 2026-02-05 00:54:18.463646 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-05 00:54:18.463651 | orchestrator | Thursday 05 February 2026 00:52:27 +0000 (0:00:01.211) 0:04:11.444 ***** 2026-02-05 00:54:18.463656 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463661 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463666 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463672 | orchestrator | 2026-02-05 00:54:18.463677 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 00:54:18.463682 | orchestrator | Thursday 05 February 2026 00:52:28 +0000 (0:00:01.269) 0:04:12.713 ***** 2026-02-05 00:54:18.463687 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:54:18.463693 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:54:18.463698 | orchestrator | ok: [testbed-node-2] 
2026-02-05 00:54:18.463703 | orchestrator | 2026-02-05 00:54:18.463709 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 00:54:18.463716 | orchestrator | Thursday 05 February 2026 00:52:30 +0000 (0:00:02.271) 0:04:14.985 ***** 2026-02-05 00:54:18.463722 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:54:18.463763 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:54:18.463772 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:54:18.463779 | orchestrator | 2026-02-05 00:54:18.463789 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-05 00:54:18.463802 | orchestrator | Thursday 05 February 2026 00:52:33 +0000 (0:00:02.870) 0:04:17.855 ***** 2026-02-05 00:54:18.463810 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-05 00:54:18.463818 | orchestrator | 2026-02-05 00:54:18.463826 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-05 00:54:18.463835 | orchestrator | Thursday 05 February 2026 00:52:34 +0000 (0:00:01.151) 0:04:19.007 ***** 2026-02-05 00:54:18.463849 | orchestrator | 2026-02-05 00:54:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:54:18.463859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:54:18.463869 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 00:54:18.463878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:54:18.463886 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:54:18.463897 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463902 | orchestrator | 2026-02-05 00:54:18.463907 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-05 00:54:18.463912 | orchestrator | Thursday 05 February 2026 00:52:36 +0000 (0:00:01.098) 0:04:20.106 ***** 2026-02-05 00:54:18.463918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:54:18.463923 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:54:18.463938 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-05 00:54:18.463954 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.463959 | orchestrator | 2026-02-05 00:54:18.463964 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-05 00:54:18.463969 | orchestrator | Thursday 05 February 2026 00:52:37 +0000 (0:00:01.212) 0:04:21.318 ***** 2026-02-05 00:54:18.463974 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.463979 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.463985 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 00:54:18.463990 | orchestrator | 2026-02-05 00:54:18.463995 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-05 00:54:18.464003 | orchestrator | Thursday 05 February 2026 00:52:39 +0000 (0:00:01.783) 0:04:23.101 ***** 2026-02-05 00:54:18.464009 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:54:18.464014 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:54:18.464019 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:54:18.464024 | orchestrator | 2026-02-05 00:54:18.464029 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-05 00:54:18.464034 | orchestrator | Thursday 05 February 2026 00:52:41 +0000 (0:00:02.267) 0:04:25.368 ***** 2026-02-05 00:54:18.464039 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:54:18.464044 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:54:18.464049 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:54:18.464054 | orchestrator | 2026-02-05 00:54:18.464060 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-05 00:54:18.464065 | orchestrator | Thursday 05 February 2026 00:52:44 +0000 (0:00:03.157) 0:04:28.526 ***** 2026-02-05 00:54:18.464070 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.464075 | orchestrator | 2026-02-05 00:54:18.464080 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-05 00:54:18.464085 | orchestrator | Thursday 05 February 2026 00:52:45 +0000 (0:00:01.292) 0:04:29.819 ***** 2026-02-05 00:54:18.464193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.464203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 00:54:18.464213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.464234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.464253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 00:54:18.464259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464274 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.464283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.464289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 00:54:18.464307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 
00:54:18.464329 | orchestrator | 2026-02-05 00:54:18.464335 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-05 00:54:18.464340 | orchestrator | Thursday 05 February 2026 00:52:49 +0000 (0:00:03.978) 0:04:33.797 ***** 2026-02-05 00:54:18.464346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.464354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 00:54:18.464360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.464395 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.464401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.464406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 00:54:18.464415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464421 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.464445 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.464455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.464461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 00:54:18.464467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 00:54:18.464481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 00:54:18.464486 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.464492 | orchestrator | 2026-02-05 00:54:18.464497 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-05 00:54:18.464503 | orchestrator | Thursday 05 February 2026 00:52:50 +0000 (0:00:00.805) 0:04:34.602 ***** 2026-02-05 00:54:18.464509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 00:54:18.464514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 00:54:18.464523 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.464541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 00:54:18.464547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 00:54:18.464553 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.464559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 00:54:18.464564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-05 00:54:18.464570 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.464575 | orchestrator | 2026-02-05 00:54:18.464581 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-05 00:54:18.464586 | orchestrator | Thursday 05 February 2026 00:52:51 +0000 (0:00:00.950) 0:04:35.553 ***** 2026-02-05 00:54:18.464591 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.464597 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.464602 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.464608 | orchestrator | 2026-02-05 00:54:18.464613 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-05 00:54:18.464618 | orchestrator | Thursday 05 February 2026 00:52:53 +0000 (0:00:01.669) 0:04:37.222 ***** 2026-02-05 00:54:18.464624 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.464629 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.464635 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.464640 | orchestrator | 2026-02-05 00:54:18.464645 | orchestrator | TASK [include_role : opensearch] 
*********************************************** 2026-02-05 00:54:18.464651 | orchestrator | Thursday 05 February 2026 00:52:55 +0000 (0:00:02.081) 0:04:39.303 ***** 2026-02-05 00:54:18.464656 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.464662 | orchestrator | 2026-02-05 00:54:18.464667 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-05 00:54:18.464673 | orchestrator | Thursday 05 February 2026 00:52:56 +0000 (0:00:01.521) 0:04:40.825 ***** 2026-02-05 00:54:18.464679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:54:18.464689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:54:18.464711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:54:18.464719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:54:18.464743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:54:18.464750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:54:18.464761 | orchestrator | 2026-02-05 00:54:18.464767 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-05 00:54:18.464773 | orchestrator | Thursday 05 February 2026 00:53:01 +0000 (0:00:04.823) 0:04:45.649 ***** 2026-02-05 00:54:18.464793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:54:18.464800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:54:18.464807 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.464832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:54:18.464842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:54:18.464853 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.464874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-05 00:54:18.464882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-05 00:54:18.464888 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.464895 | orchestrator | 2026-02-05 00:54:18.464902 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-05 00:54:18.464908 | orchestrator | Thursday 05 February 2026 00:53:02 +0000 (0:00:01.079) 0:04:46.728 ***** 2026-02-05 00:54:18.464915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-05 00:54:18.464922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:54:18.464929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}})  2026-02-05 00:54:18.464937 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.464943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-05 00:54:18.464949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:54:18.464959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:54:18.464964 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.464973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-05 00:54:18.464978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:54:18.464984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-05 00:54:18.464990 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.464995 | orchestrator | 2026-02-05 00:54:18.465001 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 
2026-02-05 00:54:18.465007 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.826) 0:04:47.555 ***** 2026-02-05 00:54:18.465012 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465018 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465023 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465028 | orchestrator | 2026-02-05 00:54:18.465034 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-05 00:54:18.465039 | orchestrator | Thursday 05 February 2026 00:53:03 +0000 (0:00:00.389) 0:04:47.945 ***** 2026-02-05 00:54:18.465045 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465050 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465056 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465061 | orchestrator | 2026-02-05 00:54:18.465080 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-05 00:54:18.465086 | orchestrator | Thursday 05 February 2026 00:53:05 +0000 (0:00:01.132) 0:04:49.077 ***** 2026-02-05 00:54:18.465092 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.465097 | orchestrator | 2026-02-05 00:54:18.465102 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-05 00:54:18.465108 | orchestrator | Thursday 05 February 2026 00:53:06 +0000 (0:00:01.508) 0:04:50.585 ***** 2026-02-05 00:54:18.465114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 00:54:18.465120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:54:18.465130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 00:54:18.465138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:54:18.465170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 00:54:18.465206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:54:18.465212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465227 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 00:54:18.465243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:54:18.465251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 00:54:18.465258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-05 00:54:18.465267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:54:18.465273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 
00:54:18.465288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 00:54:18.465318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:54:18.465329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-05 00:54:18.465334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465346 | orchestrator | 2026-02-05 00:54:18.465351 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-05 00:54:18.465357 | orchestrator | Thursday 05 February 2026 00:53:10 +0000 (0:00:03.713) 0:04:54.299 ***** 2026-02-05 00:54:18.465365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 00:54:18.465371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:54:18.465380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 00:54:18.465414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:54:18.465419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465444 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465450 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 00:54:18.465456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:54:18.465461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 00:54:18.465484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 00:54:18.465499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 00:54:18.465505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:54:18.465519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}})  2026-02-05 00:54:18.465556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-05 00:54:18.465570 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 00:54:18.465594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 00:54:18.465600 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465605 | orchestrator | 2026-02-05 00:54:18.465611 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-05 00:54:18.465616 | orchestrator | Thursday 05 February 2026 00:53:11 +0000 (0:00:00.743) 0:04:55.042 ***** 2026-02-05 00:54:18.465622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-05 00:54:18.465628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-05 
00:54:18.465634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-05 00:54:18.465640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-05 00:54:18.465646 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-05 00:54:18.465657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-05 00:54:18.465662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-05 00:54:18.465671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-05 00:54:18.465676 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
00:54:18.465682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-05 00:54:18.465693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-05 00:54:18.465699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-05 00:54:18.465707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-05 00:54:18.465713 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465719 | orchestrator | 2026-02-05 00:54:18.465737 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-05 00:54:18.465743 | orchestrator | Thursday 05 February 2026 00:53:12 +0000 (0:00:01.385) 0:04:56.428 ***** 2026-02-05 00:54:18.465748 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465754 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465759 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465769 | orchestrator | 2026-02-05 00:54:18.465775 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-05 00:54:18.465780 | orchestrator | Thursday 05 February 2026 00:53:12 +0000 (0:00:00.391) 
0:04:56.819 ***** 2026-02-05 00:54:18.465785 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465791 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465796 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465802 | orchestrator | 2026-02-05 00:54:18.465807 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-05 00:54:18.465813 | orchestrator | Thursday 05 February 2026 00:53:13 +0000 (0:00:01.118) 0:04:57.937 ***** 2026-02-05 00:54:18.465818 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:54:18.465824 | orchestrator | 2026-02-05 00:54:18.465829 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-05 00:54:18.465835 | orchestrator | Thursday 05 February 2026 00:53:15 +0000 (0:00:01.535) 0:04:59.473 ***** 2026-02-05 00:54:18.465840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:54:18.465851 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:54:18.465861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-05 00:54:18.465867 | orchestrator | 2026-02-05 
00:54:18.465872 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-05 00:54:18.465880 | orchestrator | Thursday 05 February 2026 00:53:17 +0000 (0:00:01.991) 0:05:01.464 ***** 2026-02-05 00:54:18.465886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 00:54:18.465893 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 00:54:18.465904 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-05 00:54:18.465928 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.465936 | orchestrator | 2026-02-05 00:54:18.465946 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-05 00:54:18.465954 | orchestrator | Thursday 05 February 2026 00:53:17 +0000 (0:00:00.346) 0:05:01.810 ***** 2026-02-05 00:54:18.465963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 00:54:18.465972 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.465980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 00:54:18.465988 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.465995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-05 00:54:18.466003 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466012 | orchestrator | 2026-02-05 00:54:18.466056 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-05 00:54:18.466066 | orchestrator | Thursday 05 February 2026 00:53:18 +0000 (0:00:00.581) 0:05:02.391 ***** 2026-02-05 00:54:18.466074 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466090 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466098 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466106 | orchestrator | 2026-02-05 00:54:18.466114 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-05 00:54:18.466123 | orchestrator | Thursday 05 February 2026 00:53:19 +0000 (0:00:00.679) 0:05:03.071 ***** 2026-02-05 00:54:18.466132 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466142 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466149 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466157 | orchestrator | 2026-02-05 00:54:18.466170 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-05 00:54:18.466183 | orchestrator | Thursday 05 February 2026 00:53:20 +0000 (0:00:01.164) 0:05:04.236 ***** 2026-02-05 00:54:18.466191 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 
00:54:18.466199 | orchestrator | 2026-02-05 00:54:18.466208 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-05 00:54:18.466217 | orchestrator | Thursday 05 February 2026 00:53:21 +0000 (0:00:01.410) 0:05:05.646 ***** 2026-02-05 00:54:18.466226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.466246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.466261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.466278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.466288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.466298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}}) 2026-02-05 00:54:18.466313 | orchestrator | 2026-02-05 00:54:18.466322 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-05 00:54:18.466331 | orchestrator | Thursday 05 February 2026 00:53:27 +0000 (0:00:05.655) 0:05:11.301 ***** 2026-02-05 00:54:18.466345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.466355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.466361 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.466377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.466382 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.466397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-05 00:54:18.466403 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466408 | orchestrator | 2026-02-05 00:54:18.466414 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-05 00:54:18.466422 | orchestrator | Thursday 05 February 2026 00:53:28 +0000 (0:00:00.878) 0:05:12.180 ***** 2026-02-05 00:54:18.466428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466455 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466466 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466483 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-05 00:54:18.466515 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466520 | orchestrator | 2026-02-05 00:54:18.466526 | 
orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-05 00:54:18.466531 | orchestrator | Thursday 05 February 2026 00:53:29 +0000 (0:00:00.849) 0:05:13.029 ***** 2026-02-05 00:54:18.466537 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.466542 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.466547 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.466553 | orchestrator | 2026-02-05 00:54:18.466558 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-05 00:54:18.466564 | orchestrator | Thursday 05 February 2026 00:53:30 +0000 (0:00:01.363) 0:05:14.393 ***** 2026-02-05 00:54:18.466569 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:54:18.466575 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:54:18.466580 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:54:18.466585 | orchestrator | 2026-02-05 00:54:18.466591 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-05 00:54:18.466596 | orchestrator | Thursday 05 February 2026 00:53:32 +0000 (0:00:02.281) 0:05:16.675 ***** 2026-02-05 00:54:18.466602 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466607 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466613 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466618 | orchestrator | 2026-02-05 00:54:18.466624 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-05 00:54:18.466629 | orchestrator | Thursday 05 February 2026 00:53:33 +0000 (0:00:00.653) 0:05:17.328 ***** 2026-02-05 00:54:18.466634 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466640 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466649 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466654 | orchestrator | 2026-02-05 00:54:18.466660 | 
orchestrator | TASK [include_role : trove] **************************************************** 2026-02-05 00:54:18.466668 | orchestrator | Thursday 05 February 2026 00:53:33 +0000 (0:00:00.319) 0:05:17.648 ***** 2026-02-05 00:54:18.466674 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466679 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466685 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466690 | orchestrator | 2026-02-05 00:54:18.466696 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-05 00:54:18.466701 | orchestrator | Thursday 05 February 2026 00:53:33 +0000 (0:00:00.317) 0:05:17.966 ***** 2026-02-05 00:54:18.466707 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466712 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466717 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466723 | orchestrator | 2026-02-05 00:54:18.466748 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-05 00:54:18.466753 | orchestrator | Thursday 05 February 2026 00:53:34 +0000 (0:00:00.365) 0:05:18.331 ***** 2026-02-05 00:54:18.466759 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466764 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466769 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466775 | orchestrator | 2026-02-05 00:54:18.466780 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-05 00:54:18.466786 | orchestrator | Thursday 05 February 2026 00:53:34 +0000 (0:00:00.679) 0:05:19.010 ***** 2026-02-05 00:54:18.466791 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:54:18.466796 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:54:18.466802 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:54:18.466807 | orchestrator | 2026-02-05 00:54:18.466813 | 
orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-02-05 00:54:18.466818 | orchestrator | Thursday 05 February 2026 00:53:35 +0000 (0:00:00.541) 0:05:19.552 *****
2026-02-05 00:54:18.466824 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.466829 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.466835 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.466840 | orchestrator | 
2026-02-05 00:54:18.466846 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-02-05 00:54:18.466851 | orchestrator | Thursday 05 February 2026 00:53:36 +0000 (0:00:00.721) 0:05:20.273 *****
2026-02-05 00:54:18.466857 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.466862 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.466868 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.466873 | orchestrator | 
2026-02-05 00:54:18.466878 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-05 00:54:18.466884 | orchestrator | Thursday 05 February 2026 00:53:36 +0000 (0:00:00.353) 0:05:20.627 *****
2026-02-05 00:54:18.466889 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.466895 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.466900 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.466905 | orchestrator | 
2026-02-05 00:54:18.466911 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-05 00:54:18.466917 | orchestrator | Thursday 05 February 2026 00:53:37 +0000 (0:00:01.268) 0:05:21.895 *****
2026-02-05 00:54:18.466922 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.466928 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.466933 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.466938 | orchestrator | 
2026-02-05 00:54:18.466944 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-05 00:54:18.466949 | orchestrator | Thursday 05 February 2026 00:53:38 +0000 (0:00:00.841) 0:05:22.737 *****
2026-02-05 00:54:18.466955 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.466960 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.466965 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.466971 | orchestrator | 
2026-02-05 00:54:18.466980 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-05 00:54:18.466986 | orchestrator | Thursday 05 February 2026 00:53:39 +0000 (0:00:00.809) 0:05:23.546 *****
2026-02-05 00:54:18.466991 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.466997 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.467002 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.467008 | orchestrator | 
2026-02-05 00:54:18.467013 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-05 00:54:18.467019 | orchestrator | Thursday 05 February 2026 00:53:48 +0000 (0:00:09.067) 0:05:32.614 *****
2026-02-05 00:54:18.467024 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.467029 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.467038 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.467044 | orchestrator | 
2026-02-05 00:54:18.467049 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-05 00:54:18.467055 | orchestrator | Thursday 05 February 2026 00:53:49 +0000 (0:00:01.129) 0:05:33.744 *****
2026-02-05 00:54:18.467060 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.467066 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.467071 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.467077 | orchestrator | 
2026-02-05 00:54:18.467082 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-05 00:54:18.467088 | orchestrator | Thursday 05 February 2026 00:54:02 +0000 (0:00:13.027) 0:05:46.772 *****
2026-02-05 00:54:18.467093 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.467099 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.467104 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.467109 | orchestrator | 
2026-02-05 00:54:18.467115 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-05 00:54:18.467120 | orchestrator | Thursday 05 February 2026 00:54:03 +0000 (0:00:00.755) 0:05:47.527 *****
2026-02-05 00:54:18.467126 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:54:18.467131 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:54:18.467137 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:54:18.467142 | orchestrator | 
2026-02-05 00:54:18.467147 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-05 00:54:18.467153 | orchestrator | Thursday 05 February 2026 00:54:12 +0000 (0:00:09.391) 0:05:56.919 *****
2026-02-05 00:54:18.467158 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.467164 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.467169 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.467174 | orchestrator | 
2026-02-05 00:54:18.467180 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-05 00:54:18.467185 | orchestrator | Thursday 05 February 2026 00:54:13 +0000 (0:00:00.534) 0:05:57.454 *****
2026-02-05 00:54:18.467191 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.467199 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.467205 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.467211 | orchestrator | 
2026-02-05 00:54:18.467216 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-05 00:54:18.467222 | orchestrator | Thursday 05 February 2026 00:54:13 +0000 (0:00:00.295) 0:05:57.749 *****
2026-02-05 00:54:18.467227 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.467233 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.467238 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.467244 | orchestrator | 
2026-02-05 00:54:18.467249 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-05 00:54:18.467254 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.307) 0:05:58.057 *****
2026-02-05 00:54:18.467260 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.467265 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.467271 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.467276 | orchestrator | 
2026-02-05 00:54:18.467282 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-05 00:54:18.467291 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.285) 0:05:58.343 *****
2026-02-05 00:54:18.467296 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.467302 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.467307 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.467313 | orchestrator | 
2026-02-05 00:54:18.467318 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-05 00:54:18.467324 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.520) 0:05:58.863 *****
2026-02-05 00:54:18.467329 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:54:18.467335 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:54:18.467340 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:54:18.467345 | orchestrator | 
2026-02-05 00:54:18.467351 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-05 00:54:18.467356 | 
orchestrator | Thursday 05 February 2026 00:54:15 +0000 (0:00:00.354) 0:05:59.218 *****
2026-02-05 00:54:18.467362 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.467367 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.467373 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.467378 | orchestrator | 
2026-02-05 00:54:18.467384 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-05 00:54:18.467389 | orchestrator | Thursday 05 February 2026 00:54:16 +0000 (0:00:00.925) 0:06:00.143 *****
2026-02-05 00:54:18.467395 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:54:18.467400 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:54:18.467406 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:54:18.467411 | orchestrator | 
2026-02-05 00:54:18.467417 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:54:18.467422 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 00:54:18.467428 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 00:54:18.467434 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-05 00:54:18.467439 | orchestrator | 
2026-02-05 00:54:18.467445 | orchestrator | 
2026-02-05 00:54:18.467450 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:54:18.467456 | orchestrator | Thursday 05 February 2026 00:54:16 +0000 (0:00:00.857) 0:06:01.000 *****
2026-02-05 00:54:18.467461 | orchestrator | ===============================================================================
2026-02-05 00:54:18.467466 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.03s
2026-02-05 00:54:18.467472 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.39s
2026-02-05 00:54:18.467477 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.07s
2026-02-05 00:54:18.467486 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.66s
2026-02-05 00:54:18.467491 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.26s
2026-02-05 00:54:18.467497 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.07s
2026-02-05 00:54:18.467502 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.82s
2026-02-05 00:54:18.467508 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 4.32s
2026-02-05 00:54:18.467513 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.20s
2026-02-05 00:54:18.467518 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.04s
2026-02-05 00:54:18.467524 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.02s
2026-02-05 00:54:18.467529 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.98s
2026-02-05 00:54:18.467534 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.94s
2026-02-05 00:54:18.467543 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.88s
2026-02-05 00:54:18.467549 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.80s
2026-02-05 00:54:18.467554 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.71s
2026-02-05 00:54:18.467560 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.68s
2026-02-05 00:54:18.467565 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.60s
2026-02-05 00:54:18.467570 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.57s
2026-02-05 00:54:18.467576 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.54s
2026-02-05 00:54:21.486425 | orchestrator | 2026-02-05 00:54:21 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED
2026-02-05 00:54:21.489257 | orchestrator | 2026-02-05 00:54:21 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:54:21.490621 | orchestrator | 2026-02-05 00:54:21 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED
2026-02-05 00:54:21.490942 | orchestrator | 2026-02-05 00:54:21 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:54:24.532758 | orchestrator | 2026-02-05 00:54:24 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED
2026-02-05 00:54:24.533215 | orchestrator | 2026-02-05 00:54:24 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:54:24.535554 | orchestrator | 2026-02-05 00:54:24 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED
2026-02-05 00:54:24.535654 | orchestrator | 2026-02-05 00:54:24 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:54:27.558466 | orchestrator | 2026-02-05 00:54:27 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED
2026-02-05 00:54:27.558625 | orchestrator | 2026-02-05 00:54:27 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:54:27.560876 | orchestrator | 2026-02-05 00:54:27 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED
2026-02-05 00:54:27.560926 | orchestrator | 2026-02-05 00:54:27 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:54:30.597439 | orchestrator | 2026-02-05 00:54:30 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED
2026-02-05 00:54:30.600937 | orchestrator | 2026-02-05 00:54:30 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state STARTED
2026-02-05 00:54:30.604273 | orchestrator | 2026-02-05 00:54:30 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED
2026-02-05 00:54:30.604342 | orchestrator | 2026-02-05 00:54:30 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:56:47.955678 | orchestrator | 2026-02-05 00:56:47 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED
2026-02-05 00:56:47.967788 | orchestrator | 2026-02-05 00:56:47 | INFO  | Task daa69367-8355-4d51-a80f-838e2986c19d is in state SUCCESS
2026-02-05 00:56:47.969702 | orchestrator | 
2026-02-05 00:56:47.969772 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-05 00:56:47.969797 | orchestrator | 2.16.14
2026-02-05 00:56:47.969805 | orchestrator | 
2026-02-05 00:56:47.969812 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-02-05 00:56:47.969845 | orchestrator |
2026-02-05 00:56:47.969853 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-05 00:56:47.969859 | orchestrator | Thursday 05 February 2026 00:45:58 +0000 (0:00:00.753) 0:00:00.753 *****
2026-02-05 00:56:47.969867 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.969874 | orchestrator |
2026-02-05 00:56:47.969880 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-05 00:56:47.969902 | orchestrator | Thursday 05 February 2026 00:45:59 +0000 (0:00:01.069) 0:00:01.822 *****
2026-02-05 00:56:47.969908 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.969915 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.969921 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.969928 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.969934 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.969940 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.969947 | orchestrator |
2026-02-05 00:56:47.969953 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-05 00:56:47.969959 | orchestrator | Thursday 05 February 2026 00:46:00 +0000 (0:00:01.579) 0:00:03.402 *****
2026-02-05 00:56:47.969966 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.969972 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970001 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970008 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970105 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970113 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970120 | orchestrator |
2026-02-05 00:56:47.970127 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-05 00:56:47.970133 | orchestrator | Thursday 05 February 2026 00:46:02 +0000 (0:00:01.122) 0:00:04.525 *****
2026-02-05 00:56:47.970140 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.970146 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970153 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970160 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970167 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970173 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970180 | orchestrator |
2026-02-05 00:56:47.970187 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-05 00:56:47.970193 | orchestrator | Thursday 05 February 2026 00:46:02 +0000 (0:00:00.957) 0:00:05.482 *****
2026-02-05 00:56:47.970200 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.970207 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970213 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970219 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970226 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970233 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970239 | orchestrator |
2026-02-05 00:56:47.970246 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-05 00:56:47.970253 | orchestrator | Thursday 05 February 2026 00:46:03 +0000 (0:00:00.722) 0:00:06.204 *****
2026-02-05 00:56:47.970260 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.970267 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970274 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970280 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970287 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970294 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970300 | orchestrator |
2026-02-05 00:56:47.970307 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-05 00:56:47.970314 | orchestrator | Thursday 05 February 2026 00:46:04 +0000 (0:00:00.646) 0:00:06.851 *****
2026-02-05 00:56:47.970321 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.970337 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970344 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970351 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970357 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970364 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970372 | orchestrator |
2026-02-05 00:56:47.970379 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-05 00:56:47.970386 | orchestrator | Thursday 05 February 2026 00:46:05 +0000 (0:00:01.034) 0:00:07.885 *****
2026-02-05 00:56:47.970393 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.970400 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.970407 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.970438 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.970444 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.970451 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.970457 | orchestrator |
2026-02-05 00:56:47.970464 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-05 00:56:47.970471 | orchestrator | Thursday 05 February 2026 00:46:06 +0000 (0:00:00.934) 0:00:08.820 *****
2026-02-05 00:56:47.970478 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.970485 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970492 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970498 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970505 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970511 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970518 | orchestrator |
2026-02-05 00:56:47.970524 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-05 00:56:47.970531 | orchestrator | Thursday 05 February 2026 00:46:07 +0000 (0:00:00.841) 0:00:09.662 *****
2026-02-05 00:56:47.970537 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:56:47.970544 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:56:47.970551 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:56:47.970558 | orchestrator |
2026-02-05 00:56:47.970565 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-05 00:56:47.970572 | orchestrator | Thursday 05 February 2026 00:46:07 +0000 (0:00:00.515) 0:00:10.177 *****
2026-02-05 00:56:47.970579 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.970586 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.970592 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.970610 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.970616 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.970623 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.970629 | orchestrator |
2026-02-05 00:56:47.970635 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-05 00:56:47.970642 | orchestrator | Thursday 05 February 2026 00:46:09 +0000 (0:00:02.193) 0:00:11.518 *****
2026-02-05 00:56:47.970648 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-05 00:56:47.970682 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-05 00:56:47.970689 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-05 00:56:47.970696 | orchestrator
| 2026-02-05 00:56:47.970702 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 00:56:47.970710 | orchestrator | Thursday 05 February 2026 00:46:11 +0000 (0:00:02.193) 0:00:13.712 ***** 2026-02-05 00:56:47.970717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 00:56:47.970723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 00:56:47.970730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 00:56:47.970736 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.970743 | orchestrator | 2026-02-05 00:56:47.970749 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 00:56:47.970761 | orchestrator | Thursday 05 February 2026 00:46:11 +0000 (0:00:00.339) 0:00:14.051 ***** 2026-02-05 00:56:47.970773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970787 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970794 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.970800 | orchestrator | 2026-02-05 00:56:47.970807 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 00:56:47.970814 | 
orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:00.771) 0:00:14.822 ***** 2026-02-05 00:56:47.970821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970837 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970915 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.970924 | orchestrator | 2026-02-05 00:56:47.970951 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 00:56:47.970958 | orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:00.165) 0:00:14.988 ***** 2026-02-05 00:56:47.970972 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 00:46:09.699699', 'end': '2026-02-05 00:46:09.795074', 'delta': '0:00:00.095375', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 00:46:10.446643', 'end': '2026-02-05 00:46:10.550120', 'delta': '0:00:00.103477', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.970996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 00:46:10.985874', 'end': '2026-02-05 00:46:11.084310', 'delta': '0:00:00.098436', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-05 00:56:47.971003 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971010 | orchestrator |
2026-02-05 00:56:47.971016 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-05 00:56:47.971023 | orchestrator | Thursday 05 February 2026 00:46:12 +0000 (0:00:00.493) 0:00:15.482 *****
2026-02-05 00:56:47.971030 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.971037 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.971086 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.971093 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.971099 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.971106 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.971112 | orchestrator |
2026-02-05 00:56:47.971119 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-05 00:56:47.971126 | orchestrator | Thursday 05 February 2026 00:46:14 +0000 (0:00:01.883) 0:00:17.365 *****
2026-02-05 00:56:47.971132 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-05 00:56:47.971139 | orchestrator |
2026-02-05 00:56:47.971147 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-05 00:56:47.971154 | orchestrator | Thursday 05 February 2026 00:46:15 +0000 (0:00:00.585) 0:00:17.950 *****
2026-02-05 00:56:47.971160 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971167 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971173 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971180 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971186 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971193 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971199 | orchestrator |
2026-02-05 00:56:47.971206 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-05 00:56:47.971213 | orchestrator | Thursday 05 February 2026 00:46:16 +0000 (0:00:00.958) 0:00:18.908 *****
2026-02-05 00:56:47.971220 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971226 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971233 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971239 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971246 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971253 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971259 | orchestrator |
2026-02-05 00:56:47.971266 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 00:56:47.971272 | orchestrator | Thursday 05 February 2026 00:46:18 +0000 (0:00:02.124) 0:00:21.033 *****
2026-02-05 00:56:47.971279 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971285 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971292 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971298 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971305 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971316 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971323 | orchestrator |
2026-02-05 00:56:47.971330 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-05 00:56:47.971337 | orchestrator | Thursday 05 February 2026 00:46:19 +0000 (0:00:00.964) 0:00:21.998 *****
2026-02-05 00:56:47.971344 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971351 | orchestrator |
2026-02-05 00:56:47.971357 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-05 00:56:47.971364 | orchestrator | Thursday 05 February 2026 00:46:19 +0000 (0:00:00.246) 0:00:22.244 *****
2026-02-05 00:56:47.971370 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971377 | orchestrator |
2026-02-05 00:56:47.971384 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-05 00:56:47.971390 | orchestrator | Thursday 05 February 2026 00:46:20 +0000 (0:00:00.256) 0:00:22.500 *****
2026-02-05 00:56:47.971397 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971403 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971411 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971422 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971428 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971435 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971441 | orchestrator |
2026-02-05 00:56:47.971448 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-05 00:56:47.971454 | orchestrator | Thursday 05 February 2026 00:46:20 +0000 (0:00:00.659) 0:00:23.160 *****
2026-02-05 00:56:47.971461 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971468 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971475 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971482 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971488 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971495 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971501 | orchestrator |
2026-02-05 00:56:47.971508 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-05 00:56:47.971515 | orchestrator | Thursday 05 February 2026 00:46:23 +0000 (0:00:03.179) 0:00:26.340 *****
2026-02-05 00:56:47.971521 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971528 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971534 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971541 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971547 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971554 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971560 | orchestrator |
2026-02-05 00:56:47.971567 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-05 00:56:47.971573 | orchestrator | Thursday 05 February 2026 00:46:26 +0000 (0:00:02.316) 0:00:28.656 *****
2026-02-05 00:56:47.971580 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971587 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971597 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971604 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971680 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971688 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971694 | orchestrator |
2026-02-05 00:56:47.971701 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-05 00:56:47.971708 | orchestrator | Thursday 05 February 2026 00:46:27 +0000 (0:00:01.481) 0:00:30.138 *****
2026-02-05 00:56:47.971744 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.971751 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.971758 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.971764 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.971771 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.971777 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.971783 | orchestrator |
2026-02-05 00:56:47.971790 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-05 00:56:47.971803 | orchestrator | Thursday 05 February 2026 00:46:28 +0000 (0:00:01.044) 0:00:31.182 *****
2026-02-05
00:56:47.971810 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.971816 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.971823 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.971829 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.971853 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.971859 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.971866 | orchestrator | 2026-02-05 00:56:47.971873 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 00:56:47.971879 | orchestrator | Thursday 05 February 2026 00:46:29 +0000 (0:00:00.928) 0:00:32.111 ***** 2026-02-05 00:56:47.971886 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.971893 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.971899 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.971905 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.971912 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.971918 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.971924 | orchestrator | 2026-02-05 00:56:47.971931 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 00:56:47.971937 | orchestrator | Thursday 05 February 2026 00:46:30 +0000 (0:00:01.239) 0:00:33.350 ***** 2026-02-05 00:56:47.971944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f', 'dm-uuid-LVM-W6nJ7ENqG04Qc7VCQLGpY2qnV5YhUZsM9A2LJ1qCPfepxWi2YXgpPfnxICTyGXCK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 
GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.971952 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd', 'dm-uuid-LVM-m0K1q4L1OkOvOG4NeS8BTL15y4z5NEn9UFn3b4FqGIYzR4nbwul6S35G1g1RcetS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.971964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.971971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.971978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-05 00:56:47.971988 | orchestrator | skipping: [testbed-node-3] => (item=loop3: 0.00 Bytes loop device)
2026-02-05 00:56:47.971998 | orchestrator | skipping: [testbed-node-3] => (item=loop4: 0.00 Bytes loop device)
2026-02-05 00:56:47.972005 | orchestrator | skipping: [testbed-node-4] => (item=dm-0: 20.00 GB LVM volume, Ceph OSD block device)
2026-02-05 00:56:47.972012 | orchestrator | skipping: [testbed-node-3] => (item=loop5: 0.00 Bytes loop device)
2026-02-05 00:56:47.972018 | orchestrator | skipping: [testbed-node-4] => (item=dm-1: 20.00 GB LVM volume, Ceph OSD block device)
2026-02-05 00:56:47.972026 | orchestrator | skipping: [testbed-node-3] => (item=loop6: 0.00 Bytes loop device)
2026-02-05 00:56:47.972036 | orchestrator | skipping: [testbed-node-4] => (item=loop0: 0.00 Bytes loop device)
2026-02-05 00:56:47.972043 | orchestrator | skipping: [testbed-node-4] => (item=loop1: 0.00 Bytes loop device)
2026-02-05 00:56:47.972050 | orchestrator | skipping: [testbed-node-3] => (item=loop7: 0.00 Bytes loop device)
2026-02-05 00:56:47.972062 | orchestrator | skipping: [testbed-node-4] => (item=loop2: 0.00 Bytes loop device)
2026-02-05 00:56:47.972069 | orchestrator | skipping: [testbed-node-4] => (item=loop3: 0.00 Bytes loop device)
2026-02-05 00:56:47.972148 | orchestrator | skipping: [testbed-node-4] => (item=loop4: 0.00 Bytes loop device)
2026-02-05 00:56:47.972156 | orchestrator | skipping: [testbed-node-4] => (item=loop5: 0.00 Bytes loop device)
2026-02-05 00:56:47.972163 | orchestrator | skipping: [testbed-node-4] => (item=loop6: 0.00 Bytes loop device)
2026-02-05 00:56:47.972177 | orchestrator | skipping: [testbed-node-3] => (item=sda: 80.00 GB QEMU HARDDISK, partitions sda1 cloudimg-rootfs / sda14 / sda15 UEFI / sda16 BOOT)
2026-02-05 00:56:47.972191 | orchestrator | skipping: [testbed-node-4] => (item=loop7: 0.00 Bytes loop device)
2026-02-05 00:56:47.972214 | orchestrator | skipping: [testbed-node-4] => (item=sda: 80.00 GB QEMU HARDDISK, partitions sda1 cloudimg-rootfs / sda14 / sda15 UEFI / sda16 BOOT)
2026-02-05 00:56:47.972243 | orchestrator | skipping: [testbed-node-4] => (item=sdb: 20.00 GB QEMU HARDDISK, LVM PV held by Ceph OSD dm-0)
2026-02-05 00:56:47.972254 | orchestrator | skipping: [testbed-node-4] => (item=sdc: 20.00 GB QEMU HARDDISK, LVM PV held by Ceph OSD dm-1)
2026-02-05 00:56:47.972268 | orchestrator | skipping: [testbed-node-3] => (item=sdb: 20.00 GB QEMU HARDDISK, LVM PV held by Ceph OSD dm-0)
2026-02-05 00:56:47.972275 | orchestrator | skipping: [testbed-node-4] => (item=sdd: 20.00 GB QEMU HARDDISK, no partitions or holders)
2026-02-05 00:56:47.972283 | orchestrator | skipping: [testbed-node-4] => (item=sr0: 506.00 KB QEMU DVD-ROM, label config-2)
2026-02-05 00:56:47.972290 | orchestrator | skipping: [testbed-node-5] => (item=dm-0: 20.00 GB LVM volume, Ceph OSD block device)
2026-02-05 00:56:47.973195 | orchestrator | skipping: [testbed-node-3] => (item=sdc: 20.00 GB QEMU HARDDISK, LVM PV held by Ceph OSD dm-1)
2026-02-05 00:56:47.973229 | orchestrator | skipping: [testbed-node-5] => (item=dm-1: 20.00 GB LVM volume, Ceph OSD block device)
2026-02-05 00:56:47.973242 | orchestrator | skipping: [testbed-node-3] => (item=sdd: 20.00 GB QEMU HARDDISK, no partitions or holders)
2026-02-05 00:56:47.973249 | orchestrator | skipping: [testbed-node-5] => (item=loop0: 0.00 Bytes loop device)
2026-02-05 00:56:47.973257 | orchestrator | skipping: [testbed-node-5] => (item=loop1: 0.00 Bytes loop device)
2026-02-05 00:56:47.973263 | orchestrator | skipping: [testbed-node-5] => (item=loop2: 0.00 Bytes loop device)
2026-02-05 00:56:47.973292 | orchestrator | skipping: [testbed-node-5] => (item=loop3: 0.00 Bytes loop device)
2026-02-05 00:56:47.973300 | orchestrator | skipping: [testbed-node-3] => (item=sr0: 506.00 KB QEMU DVD-ROM, label config-2)
2026-02-05 00:56:47.973314 | orchestrator | skipping: [testbed-node-5] => (item=loop4: 0.00 Bytes loop device)
2026-02-05 00:56:47.973325 | orchestrator | skipping: [testbed-node-5] => (item=loop5: 0.00 Bytes loop device)
2026-02-05 00:56:47.973336 | orchestrator | skipping: [testbed-node-5] => (item=loop6: 0.00 Bytes loop device)
2026-02-05 00:56:47.973343 | orchestrator | skipping: [testbed-node-5] => (item=loop7: 0.00 Bytes loop device)
2026-02-05 00:56:47.973352 | orchestrator | skipping: [testbed-node-5] => (item=sda: 80.00 GB QEMU HARDDISK, partitions sda1 cloudimg-rootfs / sda14 / sda15 UEFI / sda16 BOOT)
2026-02-05 00:56:47.973364 | orchestrator | skipping: [testbed-node-5] => (item=sdb: 20.00 GB QEMU HARDDISK, LVM PV held by Ceph OSD dm-0)
2026-02-05 00:56:47.973378 | orchestrator | skipping: [testbed-node-5] => (item=sdc: 20.00 GB QEMU HARDDISK, LVM PV held by Ceph OSD dm-1)
2026-02-05 00:56:47.973385 | orchestrator | skipping: [testbed-node-5] => (item=sdd: 20.00 GB QEMU HARDDISK, no partitions or holders)
2026-02-05 00:56:47.973392 | orchestrator | skipping: [testbed-node-5] => (item=sr0: 506.00 KB QEMU DVD-ROM, label config-2)
2026-02-05 00:56:47.973399 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.973407 | orchestrator | skipping: [testbed-node-0] => (item=loop0: 0.00 Bytes loop device)
2026-02-05 00:56:47.973414 | orchestrator | skipping: [testbed-node-0] => (item=loop1: 0.00 Bytes loop device)
2026-02-05 00:56:47.973421 | orchestrator | skipping: [testbed-node-0] => (item=loop2: 0.00 Bytes loop device)
2026-02-05 00:56:47.973428 | orchestrator | skipping: [testbed-node-0] => (item=loop3: 0.00 Bytes loop device)
2026-02-05 00:56:47.973442 | orchestrator | skipping: [testbed-node-0] => (item=loop4: 0.00 Bytes loop device)
2026-02-05 00:56:47.973449 | orchestrator | skipping: [testbed-node-0] => (item=loop5: 0.00 Bytes loop device)
2026-02-05 00:56:47.973461 | orchestrator | skipping: [testbed-node-0] => (item=loop6: 0.00 Bytes loop device)
2026-02-05 00:56:47.973468 | orchestrator | skipping: [testbed-node-0] => (item=loop7: 0.00 Bytes loop device)
2026-02-05 00:56:47.973475 | orchestrator | skipping: [testbed-node-0] => (item=sda: 80.00 GB QEMU HARDDISK, partitions sda1 cloudimg-rootfs / sda14 / sda15 UEFI / sda16 BOOT)
2026-02-05 00:56:47.973487 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.973498 | orchestrator | skipping: [testbed-node-0] => (item=sr0: 506.00 KB QEMU DVD-ROM, label config-2)
2026-02-05 00:56:47.973505 | orchestrator | skipping: [testbed-node-1] => (item=loop0: 0.00 Bytes loop device)
2026-02-05 00:56:47.973512 | orchestrator | skipping: [testbed-node-1] => (item=loop1: 0.00 Bytes loop device)
2026-02-05 00:56:47.973521 | orchestrator | skipping: [testbed-node-1] => (item=loop2: 0.00 Bytes loop device)
2026-02-05 00:56:47.973529 | orchestrator | skipping: [testbed-node-1] => (item=loop3: 0.00 Bytes loop device)
2026-02-05 00:56:47.973535 | orchestrator | skipping: [testbed-node-1] => (item=loop4: 0.00 Bytes loop device)
2026-02-05 00:56:47.973543 | orchestrator | skipping: [testbed-node-1] => (item=loop5: 0.00 Bytes loop device)
2026-02-05 00:56:47.973551 | orchestrator | skipping: [testbed-node-1] => (item=loop6: 0.00 Bytes loop device)
2026-02-05 00:56:47.973557 | orchestrator | skipping: [testbed-node-1] => (item=loop7: 0.00 Bytes loop device)
2026-02-05 00:56:47.973576 | orchestrator | skipping: [testbed-node-1] => (item=sda: 80.00 GB QEMU HARDDISK, partitions sda1 cloudimg-rootfs / sda14 / sda15 UEFI / sda16 BOOT)
2026-02-05 00:56:47.973584 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:56:47.973591 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.973598 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.973605 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.973612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-05 00:56:47.973703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:56:47.973710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b', 'scsi-SQEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d892bf8-de40-4598-8b0b-6c2cde83153b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:56:47.973728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:56:47.973736 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.973742 | orchestrator | 2026-02-05 00:56:47.973750 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 00:56:47.973757 | orchestrator | Thursday 05 February 2026 00:46:32 +0000 (0:00:02.117) 0:00:35.467 ***** 2026-02-05 00:56:47.973768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f', 
'dm-uuid-LVM-W6nJ7ENqG04Qc7VCQLGpY2qnV5YhUZsM9A2LJ1qCPfepxWi2YXgpPfnxICTyGXCK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd', 'dm-uuid-LVM-m0K1q4L1OkOvOG4NeS8BTL15y4z5NEn9UFn3b4FqGIYzR4nbwul6S35G1g1RcetS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973789 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973800 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IjRVxd-4RY4-7Ai2-bA1z-fs6i-PQm0-O7Xwvo', 'scsi-0QEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b', 'scsi-SQEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973878 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HldCCt-GzaL-8wFL-FznN-K21O-j0j1-Ru1MgY', 'scsi-0QEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3', 'scsi-SQEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973889 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726', 'scsi-SQEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973901 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973911 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c-osd--block--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c', 'dm-uuid-LVM-l5wfutUVY3Mjb8LJUgAGN63VEFe8QeDcgf1NL2jk6HPybKIKRq4gPQh2wIOxCEWz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973918 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a29ad6cb--22eb--5988--a460--3c83981a9937-osd--block--a29ad6cb--22eb--5988--a460--3c83981a9937', 'dm-uuid-LVM-epSIr36ljuUUSxb0VExFke7F2vw1BxjalkkiqCKp0dPNXTyo0YKF4XVaW2IuH5iy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.973959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
[... further per-device skip items elided: the same 'Conditional result was False' skip was reported for every block device (loop0-loop7, sda, sdb, sdc, sdd, dm-0, dm-1, sr0) on testbed-node-0 through testbed-node-5 — false_condition 'osd_auto_discovery | default(False) | bool' on testbed-node-4/5 and 'inventory_hostname in groups.get(osd_group_name, [])' on testbed-node-0/1/2 ...]
2026-02-05 00:56:47.973993 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.974182 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.974401 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.974422 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.974461 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.974526 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 00:56:47.974536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--44714651--8fa8--5efe--842f--d8a32b49e267-osd--block--44714651--8fa8--5efe--842f--d8a32b49e267'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UY5He4-ZO5Z-2Q7f-bsPy-bRbE-i0JZ-CnlGio', 'scsi-0QEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f', 'scsi-SQEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.974550 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685-osd--block--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m1fmTw-heZ7-Ss0N-4Ikk-0ZW8-w1Ji-pHvzZ4', 'scsi-0QEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd', 'scsi-SQEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.974557 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7', 'scsi-SQEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.974564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:56:47.974572 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.974578 | orchestrator | 2026-02-05 00:56:47.974589 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 00:56:47.974596 | orchestrator | Thursday 05 February 2026 00:46:34 +0000 (0:00:01.411) 0:00:36.879 ***** 2026-02-05 00:56:47.974603 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.974610 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.974617 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.974623 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.974630 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.974640 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.974646 | orchestrator | 2026-02-05 00:56:47.974662 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 00:56:47.974670 | orchestrator | Thursday 05 February 2026 00:46:35 +0000 (0:00:01.566) 0:00:38.445 ***** 2026-02-05 00:56:47.974676 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.974683 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.974689 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.974695 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.974701 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.974708 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.974714 | orchestrator | 2026-02-05 00:56:47.974721 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 00:56:47.974728 | orchestrator | Thursday 05 February 2026 00:46:36 +0000 (0:00:00.923) 0:00:39.369 ***** 2026-02-05 00:56:47.974734 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:56:47.974740 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.974747 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.974753 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.974760 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.974766 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.974772 | orchestrator | 2026-02-05 00:56:47.974782 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 00:56:47.974788 | orchestrator | Thursday 05 February 2026 00:46:37 +0000 (0:00:00.979) 0:00:40.348 ***** 2026-02-05 00:56:47.974795 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.974801 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.974808 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.974825 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.974832 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.974837 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.974843 | orchestrator | 2026-02-05 00:56:47.974849 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 00:56:47.974855 | orchestrator | Thursday 05 February 2026 00:46:38 +0000 (0:00:00.622) 0:00:40.971 ***** 2026-02-05 00:56:47.974860 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.974871 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.974885 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.974898 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.974910 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.974924 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.974938 | orchestrator | 2026-02-05 00:56:47.974951 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 00:56:47.974965 | orchestrator | Thursday 05 
February 2026 00:46:39 +0000 (0:00:01.103) 0:00:42.074 ***** 2026-02-05 00:56:47.974972 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.974977 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.974984 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.974991 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.974997 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.975003 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.975010 | orchestrator | 2026-02-05 00:56:47.975016 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 00:56:47.975023 | orchestrator | Thursday 05 February 2026 00:46:40 +0000 (0:00:01.125) 0:00:43.200 ***** 2026-02-05 00:56:47.975029 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-05 00:56:47.975036 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-05 00:56:47.975042 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-05 00:56:47.975049 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-05 00:56:47.975055 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-05 00:56:47.975062 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 00:56:47.975068 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 00:56:47.975081 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-05 00:56:47.975087 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 00:56:47.975093 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-05 00:56:47.975100 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-05 00:56:47.975106 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-05 00:56:47.975113 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-05 00:56:47.975119 | orchestrator | ok: 
[testbed-node-4] => (item=testbed-node-2) 2026-02-05 00:56:47.975125 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-05 00:56:47.975132 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-05 00:56:47.975139 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-05 00:56:47.975144 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-05 00:56:47.975150 | orchestrator | 2026-02-05 00:56:47.975157 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 00:56:47.975164 | orchestrator | Thursday 05 February 2026 00:46:43 +0000 (0:00:03.054) 0:00:46.254 ***** 2026-02-05 00:56:47.975170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 00:56:47.975177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 00:56:47.975183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 00:56:47.975189 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975196 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 00:56:47.975202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 00:56:47.975209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 00:56:47.975215 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 00:56:47.975227 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 00:56:47.975234 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 00:56:47.975240 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.975247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 00:56:47.975253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 00:56:47.975259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  
2026-02-05 00:56:47.975266 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.975272 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-05 00:56:47.975278 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-05 00:56:47.975284 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-05 00:56:47.975291 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.975297 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.975303 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-05 00:56:47.975310 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-05 00:56:47.975316 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-05 00:56:47.975322 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.975329 | orchestrator | 2026-02-05 00:56:47.975335 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 00:56:47.975342 | orchestrator | Thursday 05 February 2026 00:46:44 +0000 (0:00:00.892) 0:00:47.147 ***** 2026-02-05 00:56:47.975348 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.975354 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.975365 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.975372 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.975378 | orchestrator | 2026-02-05 00:56:47.975385 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 00:56:47.975396 | orchestrator | Thursday 05 February 2026 00:46:46 +0000 (0:00:01.341) 0:00:48.488 ***** 2026-02-05 00:56:47.975402 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975409 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 00:56:47.975415 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.975421 | orchestrator | 2026-02-05 00:56:47.975428 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 00:56:47.975434 | orchestrator | Thursday 05 February 2026 00:46:46 +0000 (0:00:00.252) 0:00:48.740 ***** 2026-02-05 00:56:47.975440 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975447 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.975453 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.975460 | orchestrator | 2026-02-05 00:56:47.975466 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 00:56:47.975472 | orchestrator | Thursday 05 February 2026 00:46:46 +0000 (0:00:00.323) 0:00:49.064 ***** 2026-02-05 00:56:47.975479 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975485 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.975491 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.975498 | orchestrator | 2026-02-05 00:56:47.975504 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 00:56:47.975511 | orchestrator | Thursday 05 February 2026 00:46:47 +0000 (0:00:00.488) 0:00:49.552 ***** 2026-02-05 00:56:47.975517 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.975523 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.975530 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.975536 | orchestrator | 2026-02-05 00:56:47.975543 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 00:56:47.975549 | orchestrator | Thursday 05 February 2026 00:46:47 +0000 (0:00:00.626) 0:00:50.179 ***** 2026-02-05 00:56:47.975555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.975562 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.975568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.975574 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975581 | orchestrator | 2026-02-05 00:56:47.975587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 00:56:47.975593 | orchestrator | Thursday 05 February 2026 00:46:48 +0000 (0:00:00.311) 0:00:50.491 ***** 2026-02-05 00:56:47.975599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.975606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.975612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.975619 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975625 | orchestrator | 2026-02-05 00:56:47.975631 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 00:56:47.975638 | orchestrator | Thursday 05 February 2026 00:46:48 +0000 (0:00:00.428) 0:00:50.919 ***** 2026-02-05 00:56:47.975644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.975650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.975683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.975690 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.975697 | orchestrator | 2026-02-05 00:56:47.975704 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 00:56:47.975710 | orchestrator | Thursday 05 February 2026 00:46:48 +0000 (0:00:00.333) 0:00:51.253 ***** 2026-02-05 00:56:47.975717 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.975723 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.975730 | orchestrator | ok: [testbed-node-5] 
2026-02-05 00:56:47.975736 | orchestrator | 2026-02-05 00:56:47.975743 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 00:56:47.975749 | orchestrator | Thursday 05 February 2026 00:46:49 +0000 (0:00:00.279) 0:00:51.533 ***** 2026-02-05 00:56:47.975760 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 00:56:47.975767 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 00:56:47.975779 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 00:56:47.975785 | orchestrator | 2026-02-05 00:56:47.975792 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 00:56:47.975799 | orchestrator | Thursday 05 February 2026 00:46:50 +0000 (0:00:01.110) 0:00:52.644 ***** 2026-02-05 00:56:47.975806 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:56:47.975812 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:56:47.975819 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:56:47.975825 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 00:56:47.975832 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 00:56:47.975839 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 00:56:47.975861 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 00:56:47.975867 | orchestrator | 2026-02-05 00:56:47.975874 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 00:56:47.975880 | orchestrator | Thursday 05 February 2026 00:46:51 +0000 (0:00:01.448) 0:00:54.092 ***** 2026-02-05 00:56:47.975886 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:56:47.975896 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:56:47.975903 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:56:47.975909 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 00:56:47.975916 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 00:56:47.975922 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 00:56:47.975928 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 00:56:47.975935 | orchestrator | 2026-02-05 00:56:47.975941 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:56:47.975948 | orchestrator | Thursday 05 February 2026 00:46:53 +0000 (0:00:01.706) 0:00:55.799 ***** 2026-02-05 00:56:47.975954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-5, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.975961 | orchestrator | 2026-02-05 00:56:47.975968 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:56:47.975975 | orchestrator | Thursday 05 February 2026 00:46:54 +0000 (0:00:01.302) 0:00:57.101 ***** 2026-02-05 00:56:47.975981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.975987 | orchestrator | 2026-02-05 00:56:47.975994 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:56:47.976000 | orchestrator | Thursday 05 February 
2026 00:46:55 +0000 (0:00:01.071) 0:00:58.173 ***** 2026-02-05 00:56:47.976006 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.976013 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.976019 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.976025 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.976032 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.976038 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.976044 | orchestrator | 2026-02-05 00:56:47.976051 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:56:47.976057 | orchestrator | Thursday 05 February 2026 00:46:56 +0000 (0:00:01.108) 0:00:59.282 ***** 2026-02-05 00:56:47.976067 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.976073 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.976079 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.976085 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.976091 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.976098 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.976104 | orchestrator | 2026-02-05 00:56:47.976110 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:56:47.976117 | orchestrator | Thursday 05 February 2026 00:46:57 +0000 (0:00:00.862) 0:01:00.144 ***** 2026-02-05 00:56:47.976123 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.976129 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.976136 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.976142 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.976148 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.976155 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.976161 | orchestrator | 2026-02-05 00:56:47.976167 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-02-05 00:56:47.976174 | orchestrator | Thursday 05 February 2026 00:46:58 +0000 (0:00:01.043) 0:01:01.187 ***** 2026-02-05 00:56:47.976180 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.976187 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.976193 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.976199 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.976205 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.976212 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.976218 | orchestrator | 2026-02-05 00:56:47.976224 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:56:47.976231 | orchestrator | Thursday 05 February 2026 00:46:59 +0000 (0:00:01.078) 0:01:02.266 ***** 2026-02-05 00:56:47.976237 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.976243 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.976250 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.976256 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.976262 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.976272 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.976278 | orchestrator | 2026-02-05 00:56:47.976285 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 00:56:47.976291 | orchestrator | Thursday 05 February 2026 00:47:01 +0000 (0:00:01.288) 0:01:03.554 ***** 2026-02-05 00:56:47.976298 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.976304 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.976310 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.976317 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.976323 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.976329 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.976336 | 
orchestrator |
2026-02-05 00:56:47.976342 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 00:56:47.976348 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:01.073) 0:01:04.627 *****
2026-02-05 00:56:47.976355 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.976361 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.976367 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.976374 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976380 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976386 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976392 | orchestrator |
2026-02-05 00:56:47.976399 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 00:56:47.976405 | orchestrator | Thursday 05 February 2026 00:47:02 +0000 (0:00:00.805) 0:01:05.433 *****
2026-02-05 00:56:47.976411 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.976417 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.976424 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.976434 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.976443 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.976450 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.976456 | orchestrator |
2026-02-05 00:56:47.976463 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 00:56:47.976469 | orchestrator | Thursday 05 February 2026 00:47:04 +0000 (0:00:01.451) 0:01:06.885 *****
2026-02-05 00:56:47.976476 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.976482 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.976488 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.976495 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.976501 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.976507 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.976513 | orchestrator |
2026-02-05 00:56:47.976518 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 00:56:47.976524 | orchestrator | Thursday 05 February 2026 00:47:05 +0000 (0:00:01.346) 0:01:08.231 *****
2026-02-05 00:56:47.976529 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.976534 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.976540 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.976547 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976554 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976560 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976566 | orchestrator |
2026-02-05 00:56:47.976572 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 00:56:47.976579 | orchestrator | Thursday 05 February 2026 00:47:06 +0000 (0:00:00.879) 0:01:09.111 *****
2026-02-05 00:56:47.976585 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.976592 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.976599 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.976606 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.976612 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.976619 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.976625 | orchestrator |
2026-02-05 00:56:47.976632 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 00:56:47.976639 | orchestrator | Thursday 05 February 2026 00:47:07 +0000 (0:00:00.646) 0:01:09.758 *****
2026-02-05 00:56:47.976645 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.976651 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.976691 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.976698 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976703 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976709 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976714 | orchestrator |
2026-02-05 00:56:47.976720 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 00:56:47.976725 | orchestrator | Thursday 05 February 2026 00:47:07 +0000 (0:00:00.723) 0:01:10.481 *****
2026-02-05 00:56:47.976730 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.976736 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.976742 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.976747 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976752 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976758 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976764 | orchestrator |
2026-02-05 00:56:47.976769 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 00:56:47.976774 | orchestrator | Thursday 05 February 2026 00:47:08 +0000 (0:00:00.590) 0:01:11.072 *****
2026-02-05 00:56:47.976779 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.976785 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.976791 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.976798 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976804 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976810 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976816 | orchestrator |
2026-02-05 00:56:47.976821 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 00:56:47.976831 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:00.710) 0:01:11.782 *****
2026-02-05 00:56:47.976837 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.976843 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.976849 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.976855 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976861 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976866 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976873 | orchestrator |
2026-02-05 00:56:47.976880 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 00:56:47.976886 | orchestrator | Thursday 05 February 2026 00:47:09 +0000 (0:00:00.691) 0:01:12.473 *****
2026-02-05 00:56:47.976891 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.976898 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.976904 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.976911 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.976923 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.976929 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.976935 | orchestrator |
2026-02-05 00:56:47.976941 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 00:56:47.976948 | orchestrator | Thursday 05 February 2026 00:47:10 +0000 (0:00:00.912) 0:01:13.385 *****
2026-02-05 00:56:47.976955 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.976961 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.976967 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.976973 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.976979 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.976984 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.976990 | orchestrator |
2026-02-05 00:56:47.976996 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 00:56:47.977002 | orchestrator | Thursday 05 February 2026 00:47:11 +0000 (0:00:00.689) 0:01:14.075 *****
2026-02-05 00:56:47.977008 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.977013 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.977021 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.977029 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.977036 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.977043 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.977050 | orchestrator |
2026-02-05 00:56:47.977057 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 00:56:47.977063 | orchestrator | Thursday 05 February 2026 00:47:12 +0000 (0:00:00.811) 0:01:14.886 *****
2026-02-05 00:56:47.977069 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.977075 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.977081 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.977088 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.977098 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.977104 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.977111 | orchestrator |
2026-02-05 00:56:47.977117 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-05 00:56:47.977124 | orchestrator | Thursday 05 February 2026 00:47:13 +0000 (0:00:01.398) 0:01:16.285 *****
2026-02-05 00:56:47.977130 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:56:47.977135 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.977140 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:56:47.977146 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:56:47.977151 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.977157 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.977162 | orchestrator |
2026-02-05 00:56:47.977167 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-05 00:56:47.977173 | orchestrator | Thursday 05 February 2026 00:47:15 +0000 (0:00:02.513) 0:01:18.051 *****
2026-02-05 00:56:47.977179 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:56:47.977185 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.977197 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:56:47.977204 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:56:47.977210 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.977216 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.977222 | orchestrator |
2026-02-05 00:56:47.977228 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-05 00:56:47.977235 | orchestrator | Thursday 05 February 2026 00:47:18 +0000 (0:00:02.513) 0:01:20.564 *****
2026-02-05 00:56:47.977241 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.977248 | orchestrator |
2026-02-05 00:56:47.977255 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-05 00:56:47.977262 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:01.014) 0:01:21.579 *****
2026-02-05 00:56:47.977268 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.977275 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.977282 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.977288 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.977295 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.977301 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.977308 | orchestrator |
2026-02-05 00:56:47.977315 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-05 00:56:47.977322 | orchestrator | Thursday 05 February 2026 00:47:19 +0000 (0:00:00.601) 0:01:22.181 *****
2026-02-05 00:56:47.977328 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.977335 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.977342 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.977348 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.977355 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.977361 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.977368 | orchestrator |
2026-02-05 00:56:47.977374 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-05 00:56:47.977381 | orchestrator | Thursday 05 February 2026 00:47:20 +0000 (0:00:00.664) 0:01:22.845 *****
2026-02-05 00:56:47.977388 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:56:47.977394 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:56:47.977400 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:56:47.977407 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:56:47.977413 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:56:47.977420 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-05 00:56:47.977426 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:56:47.977433 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:56:47.977440 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:56:47.977446 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:56:47.977459 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:56:47.977467 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-05 00:56:47.977473 | orchestrator |
2026-02-05 00:56:47.977480 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-05 00:56:47.977487 | orchestrator | Thursday 05 February 2026 00:47:21 +0000 (0:00:01.338) 0:01:24.183 *****
2026-02-05 00:56:47.977493 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:56:47.977500 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:56:47.977507 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:56:47.977520 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.977526 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.977533 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.977539 | orchestrator |
2026-02-05 00:56:47.977546 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-05 00:56:47.977552 | orchestrator | Thursday 05 February 2026 00:47:22 +0000 (0:00:01.160) 0:01:25.344 *****
2026-02-05 00:56:47.977559 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.977565 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.977571 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.977577 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.977583 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.977590 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.977597 | orchestrator |
2026-02-05 00:56:47.977603 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-05 00:56:47.977618 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.482) 0:01:25.827 *****
2026-02-05 00:56:47.977624 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.977629 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.977635 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.977641 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.977646 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.977652 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.977671 | orchestrator |
2026-02-05 00:56:47.977677 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-05 00:56:47.977683 | orchestrator | Thursday 05 February 2026 00:47:23 +0000 (0:00:00.626) 0:01:26.453 *****
2026-02-05 00:56:47.977689 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.977696 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.977702 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.977708 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.977715 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.977722 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.977728 | orchestrator |
2026-02-05 00:56:47.977735 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-05 00:56:47.977742 | orchestrator | Thursday 05 February 2026 00:47:24 +0000 (0:00:00.587) 0:01:27.040 *****
2026-02-05 00:56:47.977749 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.977756 | orchestrator |
2026-02-05 00:56:47.977762 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-05 00:56:47.977769 | orchestrator | Thursday 05 February 2026 00:47:25 +0000 (0:00:01.045) 0:01:28.086 *****
2026-02-05 00:56:47.977776 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.977782 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.977789 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.977796 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.977815 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.977822 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.977828 | orchestrator |
2026-02-05 00:56:47.977836 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-05 00:56:47.977842 | orchestrator | Thursday 05 February 2026 00:48:05 +0000 (0:00:40.265) 0:02:08.352 *****
2026-02-05 00:56:47.977849 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:56:47.977855 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:56:47.977861 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:56:47.977868 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.977874 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:56:47.977881 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:56:47.977894 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:56:47.977901 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.977907 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:56:47.977913 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:56:47.977920 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:56:47.977926 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.977933 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:56:47.977940 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:56:47.977946 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:56:47.977953 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.977960 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:56:47.977967 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:56:47.977973 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:56:47.977980 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.977993 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-05 00:56:47.978000 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-05 00:56:47.978007 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-05 00:56:47.978041 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978049 | orchestrator |
2026-02-05 00:56:47.978056 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-05 00:56:47.978062 | orchestrator | Thursday 05 February 2026 00:48:06 +0000 (0:00:00.923) 0:02:09.276 *****
2026-02-05 00:56:47.978069 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978075 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978081 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978087 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978094 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978100 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978106 | orchestrator |
2026-02-05 00:56:47.978113 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-05 00:56:47.978120 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.532) 0:02:09.808 *****
2026-02-05 00:56:47.978127 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978134 | orchestrator |
2026-02-05 00:56:47.978141 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-05 00:56:47.978147 | orchestrator | Thursday 05 February 2026 00:48:07 +0000 (0:00:00.356) 0:02:10.165 *****
2026-02-05 00:56:47.978154 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978161 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978168 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978179 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978186 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978192 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978200 | orchestrator |
2026-02-05 00:56:47.978207 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-05 00:56:47.978214 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:00.613) 0:02:10.778 *****
2026-02-05 00:56:47.978220 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978226 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978233 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978240 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978246 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978253 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978265 | orchestrator |
2026-02-05 00:56:47.978272 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-05 00:56:47.978279 | orchestrator | Thursday 05 February 2026 00:48:08 +0000 (0:00:00.877) 0:02:11.387 *****
2026-02-05 00:56:47.978286 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978293 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978299 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978306 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978312 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978319 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978326 | orchestrator |
2026-02-05 00:56:47.978333 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-05 00:56:47.978340 | orchestrator | Thursday 05 February 2026 00:48:09 +0000 (0:00:00.877) 0:02:12.264 *****
2026-02-05 00:56:47.978346 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.978352 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.978358 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.978365 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.978372 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.978379 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.978386 | orchestrator |
2026-02-05 00:56:47.978393 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-05 00:56:47.978400 | orchestrator | Thursday 05 February 2026 00:48:11 +0000 (0:00:02.194) 0:02:14.458 *****
2026-02-05 00:56:47.978407 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.978414 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.978420 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.978427 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.978434 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.978441 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.978447 | orchestrator |
2026-02-05 00:56:47.978454 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-05 00:56:47.978461 | orchestrator | Thursday 05 February 2026 00:48:12 +0000 (0:00:01.008) 0:02:15.467 *****
2026-02-05 00:56:47.978468 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.978476 | orchestrator |
2026-02-05 00:56:47.978483 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-05 00:56:47.978490 | orchestrator | Thursday 05 February 2026 00:48:14 +0000 (0:00:01.167) 0:02:16.634 *****
2026-02-05 00:56:47.978496 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978503 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978510 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978517 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978524 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978530 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978537 | orchestrator |
2026-02-05 00:56:47.978543 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-05 00:56:47.978550 | orchestrator | Thursday 05 February 2026 00:48:14 +0000 (0:00:00.561) 0:02:17.196 *****
2026-02-05 00:56:47.978557 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978564 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978571 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978578 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978584 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978590 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978596 | orchestrator |
2026-02-05 00:56:47.978603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-05 00:56:47.978610 | orchestrator | Thursday 05 February 2026 00:48:15 +0000 (0:00:00.735) 0:02:17.931 *****
2026-02-05 00:56:47.978617 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978624 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978643 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978690 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978699 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978706 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978713 | orchestrator |
2026-02-05 00:56:47.978720 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-05 00:56:47.978727 | orchestrator | Thursday 05 February 2026 00:48:15 +0000 (0:00:00.536) 0:02:18.468 *****
2026-02-05 00:56:47.978734 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978741 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978748 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978754 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978761 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978767 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978773 | orchestrator |
2026-02-05 00:56:47.978780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-05 00:56:47.978787 | orchestrator | Thursday 05 February 2026 00:48:16 +0000 (0:00:00.774) 0:02:19.242 *****
2026-02-05 00:56:47.978793 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978800 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978807 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978813 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978820 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978826 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978832 | orchestrator |
2026-02-05 00:56:47.978840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-05 00:56:47.978846 | orchestrator | Thursday 05 February 2026 00:48:17 +0000 (0:00:00.588) 0:02:19.830 *****
2026-02-05 00:56:47.978853 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978864 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978871 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978878 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978885 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978892 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978899 | orchestrator |
2026-02-05 00:56:47.978906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-05 00:56:47.978913 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.832) 0:02:20.663 *****
2026-02-05 00:56:47.978920 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978927 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.978934 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.978941 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.978948 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.978956 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.978963 | orchestrator |
2026-02-05 00:56:47.978972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-05 00:56:47.978980 | orchestrator | Thursday 05 February 2026 00:48:18 +0000 (0:00:00.570) 0:02:21.234 *****
2026-02-05 00:56:47.978989 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.978996 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.979003 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.979010 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.979017 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.979025 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.979032 | orchestrator |
2026-02-05 00:56:47.979040 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-05 00:56:47.979047 | orchestrator | Thursday 05 February 2026 00:48:19 +0000 (0:00:00.699) 0:02:21.934 *****
2026-02-05 00:56:47.979054 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.979062 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.979069 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.979075 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.979081 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.979088 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.979103 | orchestrator |
2026-02-05 00:56:47.979110 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-05 00:56:47.979118 | orchestrator | Thursday 05 February 2026 00:48:20 +0000 (0:00:01.095) 0:02:23.030 *****
2026-02-05 00:56:47.979125 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.979133 | orchestrator |
2026-02-05 00:56:47.979140 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-05 00:56:47.979148 | orchestrator | Thursday 05 February 2026 00:48:21 +0000 (0:00:01.068) 0:02:24.098 *****
2026-02-05 00:56:47.979155 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-05 00:56:47.979162 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-05 00:56:47.979169 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-05 00:56:47.979176 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-05 00:56:47.979182 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-05 00:56:47.979193 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-05 00:56:47.979200 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-05 00:56:47.979206 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-05 00:56:47.979213 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-05 00:56:47.979220 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-05 00:56:47.979227 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-05 00:56:47.979234 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-05 00:56:47.979240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-05 00:56:47.979247 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-05 00:56:47.979253 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-05 00:56:47.979260 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-05 00:56:47.979266 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-05 00:56:47.979273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-05 00:56:47.979286 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-05 00:56:47.979293 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-05 00:56:47.979300 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-05 00:56:47.979306 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-05 00:56:47.979312 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-05 00:56:47.979319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-05 00:56:47.979326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-05 00:56:47.979332 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-05 00:56:47.979338 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-05 00:56:47.979345 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-05 00:56:47.979351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-05 00:56:47.979358 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-05 00:56:47.979365 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-05 00:56:47.979371 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-05 00:56:47.979378 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-05 00:56:47.979384 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-05 00:56:47.979391 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-05 00:56:47.979401 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-05 00:56:47.979407 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-05 00:56:47.979418 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-05 00:56:47.979425 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-05 00:56:47.979431 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-05 00:56:47.979438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-05 00:56:47.979445 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-05 00:56:47.979452 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-05 00:56:47.979458 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 00:56:47.979465 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-05 00:56:47.979471 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-05 00:56:47.979478 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-05 00:56:47.979485 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-05 00:56:47.979491 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 00:56:47.979498 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 00:56:47.979504 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 00:56:47.979511 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-05 00:56:47.979518 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 00:56:47.979524 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 00:56:47.979531 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 00:56:47.979537 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 00:56:47.979544 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 00:56:47.979550 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 00:56:47.979557 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-05 00:56:47.979563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 00:56:47.979570 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 00:56:47.979576 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 00:56:47.979583 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 00:56:47.979589 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-05 00:56:47.979596 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-05 00:56:47.979603 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-05 00:56:47.979609 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-05 00:56:47.979616 | orchestrator | changed: [testbed-node-5] =>
(item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:56:47.979622 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:56:47.979629 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:56:47.979635 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:56:47.979641 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:56:47.979647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-05 00:56:47.979667 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:56:47.979673 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:56:47.979680 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-05 00:56:47.979691 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:56:47.979706 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:56:47.979712 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-05 00:56:47.979718 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:56:47.979724 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-05 00:56:47.979730 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-05 00:56:47.979737 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:56:47.979742 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:56:47.979748 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-05 00:56:47.979754 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-05 00:56:47.979760 | orchestrator | 
changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-05 00:56:47.979766 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-05 00:56:47.979772 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:56:47.979778 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-05 00:56:47.979784 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-05 00:56:47.979791 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-05 00:56:47.979801 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-05 00:56:47.979807 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-05 00:56:47.979814 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-05 00:56:47.979820 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-05 00:56:47.979826 | orchestrator | 2026-02-05 00:56:47.979833 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-05 00:56:47.979840 | orchestrator | Thursday 05 February 2026 00:48:28 +0000 (0:00:06.770) 0:02:30.869 ***** 2026-02-05 00:56:47.979846 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.979852 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.979862 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.979875 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.979887 | orchestrator | 2026-02-05 00:56:47.979896 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-05 00:56:47.979903 | orchestrator | Thursday 05 February 2026 00:48:29 +0000 (0:00:01.092) 0:02:31.962 ***** 2026-02-05 00:56:47.979910 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.979917 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.979925 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.979932 | orchestrator | 2026-02-05 00:56:47.979942 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-05 00:56:47.979949 | orchestrator | Thursday 05 February 2026 00:48:30 +0000 (0:00:00.839) 0:02:32.802 ***** 2026-02-05 00:56:47.979956 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.979963 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.979971 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.979977 | orchestrator | 2026-02-05 00:56:47.979984 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-05 00:56:47.979996 | orchestrator | Thursday 05 February 2026 00:48:31 +0000 (0:00:01.333) 0:02:34.135 ***** 2026-02-05 00:56:47.980005 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.980013 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.980021 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.980033 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980045 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980056 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980063 | orchestrator | 2026-02-05 00:56:47.980070 | orchestrator 
| TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-05 00:56:47.980076 | orchestrator | Thursday 05 February 2026 00:48:32 +0000 (0:00:01.107) 0:02:35.242 ***** 2026-02-05 00:56:47.980082 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.980088 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.980095 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.980102 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980109 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980115 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980122 | orchestrator | 2026-02-05 00:56:47.980129 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-05 00:56:47.980136 | orchestrator | Thursday 05 February 2026 00:48:33 +0000 (0:00:00.819) 0:02:36.062 ***** 2026-02-05 00:56:47.980142 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980149 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980156 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980163 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980169 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980176 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980182 | orchestrator | 2026-02-05 00:56:47.980196 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-05 00:56:47.980203 | orchestrator | Thursday 05 February 2026 00:48:34 +0000 (0:00:01.188) 0:02:37.250 ***** 2026-02-05 00:56:47.980210 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980217 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980223 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980229 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980235 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980241 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980248 | orchestrator | 2026-02-05 00:56:47.980255 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-05 00:56:47.980261 | orchestrator | Thursday 05 February 2026 00:48:35 +0000 (0:00:00.662) 0:02:37.913 ***** 2026-02-05 00:56:47.980268 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980275 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980281 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980288 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980294 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980301 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980308 | orchestrator | 2026-02-05 00:56:47.980315 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-05 00:56:47.980322 | orchestrator | Thursday 05 February 2026 00:48:36 +0000 (0:00:01.036) 0:02:38.950 ***** 2026-02-05 00:56:47.980328 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980334 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980340 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980347 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980353 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980364 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980371 | orchestrator | 2026-02-05 00:56:47.980378 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-05 00:56:47.980385 | orchestrator | Thursday 05 February 2026 00:48:37 +0000 (0:00:00.704) 0:02:39.655 ***** 2026-02-05 00:56:47.980391 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980404 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980411 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 00:56:47.980417 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980424 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980430 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980436 | orchestrator | 2026-02-05 00:56:47.980443 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-05 00:56:47.980450 | orchestrator | Thursday 05 February 2026 00:48:37 +0000 (0:00:00.743) 0:02:40.398 ***** 2026-02-05 00:56:47.980456 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980463 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980470 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980476 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980483 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980489 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980496 | orchestrator | 2026-02-05 00:56:47.980503 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-05 00:56:47.980509 | orchestrator | Thursday 05 February 2026 00:48:38 +0000 (0:00:00.692) 0:02:41.091 ***** 2026-02-05 00:56:47.980516 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980523 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980530 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980536 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.980543 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.980550 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.980556 | orchestrator | 2026-02-05 00:56:47.980563 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-05 00:56:47.980570 | orchestrator | Thursday 05 February 2026 00:48:41 +0000 (0:00:02.933) 0:02:44.025 ***** 2026-02-05 00:56:47.980577 | 
orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.980584 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.980591 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.980597 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980604 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980610 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980616 | orchestrator | 2026-02-05 00:56:47.980623 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-05 00:56:47.980629 | orchestrator | Thursday 05 February 2026 00:48:42 +0000 (0:00:00.544) 0:02:44.570 ***** 2026-02-05 00:56:47.980636 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.980642 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.980649 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.980668 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980675 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980682 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980688 | orchestrator | 2026-02-05 00:56:47.980695 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-05 00:56:47.980702 | orchestrator | Thursday 05 February 2026 00:48:42 +0000 (0:00:00.755) 0:02:45.326 ***** 2026-02-05 00:56:47.980709 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980715 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980722 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980729 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980736 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980742 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980749 | orchestrator | 2026-02-05 00:56:47.980756 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-05 00:56:47.980762 | orchestrator | 
Thursday 05 February 2026 00:48:43 +0000 (0:00:00.466) 0:02:45.792 ***** 2026-02-05 00:56:47.980769 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.980776 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.980788 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.980795 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980807 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980814 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980821 | orchestrator | 2026-02-05 00:56:47.980828 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-05 00:56:47.980834 | orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:00.803) 0:02:46.595 ***** 2026-02-05 00:56:47.980842 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-05 00:56:47.980851 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-05 00:56:47.980858 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 
'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-05 00:56:47.980865 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-05 00:56:47.980872 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980879 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-05 00:56:47.980886 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-02-05 00:56:47.980893 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980900 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980906 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980913 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980920 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980926 | orchestrator | 2026-02-05 00:56:47.980933 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-05 00:56:47.980940 | orchestrator | Thursday 05 February 2026 00:48:44 +0000 (0:00:00.603) 0:02:47.198 ***** 2026-02-05 00:56:47.980947 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.980953 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.980961 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.980967 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.980974 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.980980 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.980987 | orchestrator | 2026-02-05 00:56:47.980993 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-05 00:56:47.981000 | orchestrator | Thursday 05 February 2026 00:48:45 +0000 (0:00:00.800) 0:02:47.999 ***** 2026-02-05 00:56:47.981015 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981022 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.981028 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.981035 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981041 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981048 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981054 | orchestrator | 2026-02-05 00:56:47.981061 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 00:56:47.981067 | orchestrator | Thursday 05 February 2026 00:48:46 +0000 (0:00:00.532) 0:02:48.532 ***** 2026-02-05 00:56:47.981073 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981079 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.981086 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.981092 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981099 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981105 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981112 | orchestrator | 2026-02-05 00:56:47.981119 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 
00:56:47.981126 | orchestrator | Thursday 05 February 2026 00:48:47 +0000 (0:00:00.969) 0:02:49.502 ***** 2026-02-05 00:56:47.981132 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981139 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.981146 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.981152 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981159 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981166 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981172 | orchestrator | 2026-02-05 00:56:47.981179 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 00:56:47.981191 | orchestrator | Thursday 05 February 2026 00:48:47 +0000 (0:00:00.881) 0:02:50.383 ***** 2026-02-05 00:56:47.981198 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981204 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981210 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981217 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.981224 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981230 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.981237 | orchestrator | 2026-02-05 00:56:47.981243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 00:56:47.981250 | orchestrator | Thursday 05 February 2026 00:48:48 +0000 (0:00:00.962) 0:02:51.346 ***** 2026-02-05 00:56:47.981256 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.981263 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.981269 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981276 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.981282 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981289 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981296 | orchestrator | 2026-02-05 
00:56:47.981302 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 00:56:47.981309 | orchestrator | Thursday 05 February 2026 00:48:49 +0000 (0:00:00.706) 0:02:52.052 ***** 2026-02-05 00:56:47.981316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.981322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.981349 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.981357 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981364 | orchestrator | 2026-02-05 00:56:47.981373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 00:56:47.981380 | orchestrator | Thursday 05 February 2026 00:48:49 +0000 (0:00:00.366) 0:02:52.418 ***** 2026-02-05 00:56:47.981387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.981395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.981403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.981416 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981423 | orchestrator | 2026-02-05 00:56:47.981430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 00:56:47.981436 | orchestrator | Thursday 05 February 2026 00:48:50 +0000 (0:00:00.398) 0:02:52.817 ***** 2026-02-05 00:56:47.981443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.981449 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.981455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.981462 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981468 | orchestrator | 2026-02-05 00:56:47.981475 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-02-05 00:56:47.981481 | orchestrator | Thursday 05 February 2026 00:48:50 +0000 (0:00:00.555) 0:02:53.373 ***** 2026-02-05 00:56:47.981487 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.981494 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.981500 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.981507 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981513 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981520 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981526 | orchestrator | 2026-02-05 00:56:47.981533 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 00:56:47.981540 | orchestrator | Thursday 05 February 2026 00:48:51 +0000 (0:00:01.080) 0:02:54.453 ***** 2026-02-05 00:56:47.981547 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 00:56:47.981553 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 00:56:47.981560 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-05 00:56:47.981567 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.981574 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 00:56:47.981581 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-05 00:56:47.981588 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-05 00:56:47.981594 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.981601 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.981608 | orchestrator | 2026-02-05 00:56:47.981615 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-05 00:56:47.981621 | orchestrator | Thursday 05 February 2026 00:48:54 +0000 (0:00:02.056) 0:02:56.509 ***** 2026-02-05 00:56:47.981628 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.981635 | orchestrator | changed: [testbed-node-4] 
2026-02-05 00:56:47.981642 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.981648 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.981668 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.981675 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.981682 | orchestrator | 2026-02-05 00:56:47.981688 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:56:47.981695 | orchestrator | Thursday 05 February 2026 00:48:56 +0000 (0:00:02.594) 0:02:59.104 ***** 2026-02-05 00:56:47.981701 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.981708 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.981714 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.981721 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.981727 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.981734 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.981740 | orchestrator | 2026-02-05 00:56:47.981747 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-05 00:56:47.981754 | orchestrator | Thursday 05 February 2026 00:48:57 +0000 (0:00:01.127) 0:03:00.231 ***** 2026-02-05 00:56:47.981761 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.981768 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.981774 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.981781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.981794 | orchestrator | 2026-02-05 00:56:47.981801 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-05 00:56:47.981814 | orchestrator | Thursday 05 February 2026 00:48:58 +0000 (0:00:00.961) 0:03:01.193 ***** 2026-02-05 00:56:47.981821 | orchestrator | ok: [testbed-node-0] 
2026-02-05 00:56:47.981827 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.981836 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.981843 | orchestrator |
2026-02-05 00:56:47.981851 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-05 00:56:47.981857 | orchestrator | Thursday 05 February 2026 00:48:59 +0000 (0:00:00.363) 0:03:01.556 *****
2026-02-05 00:56:47.981864 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.981870 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.981877 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.981883 | orchestrator |
2026-02-05 00:56:47.981889 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-05 00:56:47.981896 | orchestrator | Thursday 05 February 2026 00:49:00 +0000 (0:00:01.503) 0:03:03.060 *****
2026-02-05 00:56:47.981902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:56:47.981908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:56:47.981914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:56:47.981921 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.981927 | orchestrator |
2026-02-05 00:56:47.981934 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-05 00:56:47.981940 | orchestrator | Thursday 05 February 2026 00:49:01 +0000 (0:00:00.546) 0:03:03.606 *****
2026-02-05 00:56:47.981946 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.981953 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.981959 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.981966 | orchestrator |
2026-02-05 00:56:47.981972 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-05 00:56:47.981984 | orchestrator | Thursday 05 February 2026 00:49:01 +0000 (0:00:00.524) 0:03:04.131 *****
2026-02-05 00:56:47.981990 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.981997 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.982003 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.982010 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:56:47.982055 | orchestrator |
2026-02-05 00:56:47.982063 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-05 00:56:47.982069 | orchestrator | Thursday 05 February 2026 00:49:02 +0000 (0:00:01.007) 0:03:05.138 *****
2026-02-05 00:56:47.982075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:56:47.982081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:56:47.982087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:56:47.982094 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982100 | orchestrator |
2026-02-05 00:56:47.982107 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-05 00:56:47.982116 | orchestrator | Thursday 05 February 2026 00:49:03 +0000 (0:00:00.346) 0:03:05.485 *****
2026-02-05 00:56:47.982123 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982130 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.982136 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.982142 | orchestrator |
2026-02-05 00:56:47.982149 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-05 00:56:47.982155 | orchestrator | Thursday 05 February 2026 00:49:03 +0000 (0:00:00.313) 0:03:05.798 *****
2026-02-05 00:56:47.982161 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982168 | orchestrator |
2026-02-05 00:56:47.982174 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-05 00:56:47.982187 | orchestrator | Thursday 05 February 2026 00:49:03 +0000 (0:00:00.180) 0:03:05.979 *****
2026-02-05 00:56:47.982193 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982200 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.982206 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.982212 | orchestrator |
2026-02-05 00:56:47.982218 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-05 00:56:47.982225 | orchestrator | Thursday 05 February 2026 00:49:03 +0000 (0:00:00.442) 0:03:06.422 *****
2026-02-05 00:56:47.982232 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982239 | orchestrator |
2026-02-05 00:56:47.982245 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-05 00:56:47.982252 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.186) 0:03:06.608 *****
2026-02-05 00:56:47.982258 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982265 | orchestrator |
2026-02-05 00:56:47.982271 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-05 00:56:47.982278 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.211) 0:03:06.819 *****
2026-02-05 00:56:47.982285 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982291 | orchestrator |
2026-02-05 00:56:47.982298 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-05 00:56:47.982305 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.108) 0:03:06.928 *****
2026-02-05 00:56:47.982312 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982318 | orchestrator |
2026-02-05 00:56:47.982325 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-05 00:56:47.982331 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.218) 0:03:07.146 *****
2026-02-05 00:56:47.982337 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982344 | orchestrator |
2026-02-05 00:56:47.982351 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-05 00:56:47.982357 | orchestrator | Thursday 05 February 2026 00:49:04 +0000 (0:00:00.202) 0:03:07.349 *****
2026-02-05 00:56:47.982364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:56:47.982370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:56:47.982377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:56:47.982383 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982390 | orchestrator |
2026-02-05 00:56:47.982396 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-05 00:56:47.982415 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.437) 0:03:07.786 *****
2026-02-05 00:56:47.982422 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982429 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.982436 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.982443 | orchestrator |
2026-02-05 00:56:47.982450 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-05 00:56:47.982456 | orchestrator | Thursday 05 February 2026 00:49:05 +0000 (0:00:00.385) 0:03:08.172 *****
2026-02-05 00:56:47.982463 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982470 | orchestrator |
2026-02-05 00:56:47.982476 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-05 00:56:47.982483 | orchestrator | Thursday 05 February 2026 00:49:06 +0000 (0:00:00.735) 0:03:08.908 *****
2026-02-05 00:56:47.982490 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982497 | orchestrator |
2026-02-05 00:56:47.982504 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-05 00:56:47.982510 | orchestrator | Thursday 05 February 2026 00:49:06 +0000 (0:00:00.212) 0:03:09.120 *****
2026-02-05 00:56:47.982517 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.982523 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.982530 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.982537 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:56:47.982550 | orchestrator |
2026-02-05 00:56:47.982557 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-05 00:56:47.982567 | orchestrator | Thursday 05 February 2026 00:49:07 +0000 (0:00:01.004) 0:03:10.125 *****
2026-02-05 00:56:47.982574 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.982581 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.982587 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.982593 | orchestrator |
2026-02-05 00:56:47.982600 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-05 00:56:47.982607 | orchestrator | Thursday 05 February 2026 00:49:08 +0000 (0:00:00.491) 0:03:10.616 *****
2026-02-05 00:56:47.982613 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:56:47.982620 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:56:47.982627 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:56:47.982634 | orchestrator |
2026-02-05 00:56:47.982640 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-05 00:56:47.982647 | orchestrator | Thursday 05 February 2026 00:49:09 +0000 (0:00:01.354) 0:03:11.971 *****
2026-02-05 00:56:47.982686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:56:47.982695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:56:47.982701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:56:47.982708 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982715 | orchestrator |
2026-02-05 00:56:47.982722 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-05 00:56:47.982729 | orchestrator | Thursday 05 February 2026 00:49:10 +0000 (0:00:00.625) 0:03:12.596 *****
2026-02-05 00:56:47.982736 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.982743 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.982750 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.982756 | orchestrator |
2026-02-05 00:56:47.982763 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-05 00:56:47.982772 | orchestrator | Thursday 05 February 2026 00:49:10 +0000 (0:00:00.357) 0:03:12.953 *****
2026-02-05 00:56:47.982785 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.982792 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.982799 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.982807 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 00:56:47.982814 | orchestrator |
2026-02-05 00:56:47.982822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-05 00:56:47.982829 | orchestrator | Thursday 05 February 2026 00:49:11 +0000 (0:00:00.926) 0:03:13.879 *****
2026-02-05 00:56:47.982836 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.982844 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.982851 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.982857 | orchestrator |
2026-02-05 00:56:47.982864 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-05 00:56:47.982870 | orchestrator | Thursday 05 February 2026 00:49:11 +0000 (0:00:00.320) 0:03:14.200 *****
2026-02-05 00:56:47.982877 | orchestrator | changed: [testbed-node-3]
2026-02-05 00:56:47.982884 | orchestrator | changed: [testbed-node-5]
2026-02-05 00:56:47.982890 | orchestrator | changed: [testbed-node-4]
2026-02-05 00:56:47.982897 | orchestrator |
2026-02-05 00:56:47.982903 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-05 00:56:47.982910 | orchestrator | Thursday 05 February 2026 00:49:13 +0000 (0:00:01.307) 0:03:15.507 *****
2026-02-05 00:56:47.982917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-05 00:56:47.982923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-05 00:56:47.982930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-05 00:56:47.982937 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.982943 | orchestrator |
2026-02-05 00:56:47.982956 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-05 00:56:47.982963 | orchestrator | Thursday 05 February 2026 00:49:13 +0000 (0:00:00.848) 0:03:16.356 *****
2026-02-05 00:56:47.982970 | orchestrator | ok: [testbed-node-3]
2026-02-05 00:56:47.982976 | orchestrator | ok: [testbed-node-4]
2026-02-05 00:56:47.982983 | orchestrator | ok: [testbed-node-5]
2026-02-05 00:56:47.982989 | orchestrator |
2026-02-05 00:56:47.982996 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-05 00:56:47.983003 | orchestrator | Thursday 05 February 2026 00:49:14 +0000 (0:00:00.319) 0:03:16.675 *****
2026-02-05 00:56:47.983010 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.983016 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.983023 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.983030 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983036 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983049 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983056 | orchestrator |
2026-02-05 00:56:47.983063 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-05 00:56:47.983069 | orchestrator | Thursday 05 February 2026 00:49:14 +0000 (0:00:00.521) 0:03:17.197 *****
2026-02-05 00:56:47.983076 | orchestrator | skipping: [testbed-node-3]
2026-02-05 00:56:47.983082 | orchestrator | skipping: [testbed-node-4]
2026-02-05 00:56:47.983088 | orchestrator | skipping: [testbed-node-5]
2026-02-05 00:56:47.983094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.983101 | orchestrator |
2026-02-05 00:56:47.983108 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-05 00:56:47.983115 | orchestrator | Thursday 05 February 2026 00:49:15 +0000 (0:00:01.038) 0:03:18.235 *****
2026-02-05 00:56:47.983122 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983129 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983136 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983143 | orchestrator |
2026-02-05 00:56:47.983150 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-05 00:56:47.983157 | orchestrator | Thursday 05 February 2026 00:49:16 +0000 (0:00:00.313) 0:03:18.548 *****
2026-02-05 00:56:47.983164 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.983171 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.983178 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.983184 | orchestrator |
2026-02-05 00:56:47.983196 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-05 00:56:47.983211 | orchestrator | Thursday 05 February 2026 00:49:17 +0000 (0:00:01.228) 0:03:19.777 *****
2026-02-05 00:56:47.983218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-05 00:56:47.983226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-05 00:56:47.983233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-05 00:56:47.983241 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983247 | orchestrator |
2026-02-05 00:56:47.983254 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-05 00:56:47.983261 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:00.818) 0:03:20.595 *****
2026-02-05 00:56:47.983267 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983274 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983281 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983287 | orchestrator |
2026-02-05 00:56:47.983294 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-02-05 00:56:47.983300 | orchestrator |
2026-02-05 00:56:47.983307 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-05 00:56:47.983314 | orchestrator | Thursday 05 February 2026 00:49:18 +0000 (0:00:00.482) 0:03:21.078 *****
2026-02-05 00:56:47.983320 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.983333 | orchestrator |
2026-02-05 00:56:47.983340 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-05 00:56:47.983346 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.452) 0:03:21.530 *****
2026-02-05 00:56:47.983352 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.983358 | orchestrator |
2026-02-05 00:56:47.983365 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-05 00:56:47.983373 | orchestrator | Thursday 05 February 2026 00:49:19 +0000 (0:00:00.624) 0:03:22.155 *****
2026-02-05 00:56:47.983380 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983386 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983393 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983399 | orchestrator |
2026-02-05 00:56:47.983406 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-05 00:56:47.983412 | orchestrator | Thursday 05 February 2026 00:49:20 +0000 (0:00:00.664) 0:03:22.819 *****
2026-02-05 00:56:47.983419 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983425 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983432 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983439 | orchestrator |
2026-02-05 00:56:47.983445 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-05 00:56:47.983453 | orchestrator | Thursday 05 February 2026 00:49:20 +0000 (0:00:00.264) 0:03:23.083 *****
2026-02-05 00:56:47.983459 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983466 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983472 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983479 | orchestrator |
2026-02-05 00:56:47.983485 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-05 00:56:47.983491 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.404) 0:03:23.488 *****
2026-02-05 00:56:47.983497 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983504 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983510 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983517 | orchestrator |
2026-02-05 00:56:47.983525 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-05 00:56:47.983532 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.300) 0:03:23.788 *****
2026-02-05 00:56:47.983540 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983546 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983553 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983559 | orchestrator |
2026-02-05 00:56:47.983565 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-05 00:56:47.983571 | orchestrator | Thursday 05 February 2026 00:49:21 +0000 (0:00:00.630) 0:03:24.419 *****
2026-02-05 00:56:47.983578 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983584 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983590 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983597 | orchestrator |
2026-02-05 00:56:47.983604 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-05 00:56:47.983610 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.303) 0:03:24.723 *****
2026-02-05 00:56:47.983623 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983631 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983638 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983644 | orchestrator |
2026-02-05 00:56:47.983651 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-05 00:56:47.983673 | orchestrator | Thursday 05 February 2026 00:49:22 +0000 (0:00:00.432) 0:03:25.155 *****
2026-02-05 00:56:47.983679 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983686 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983692 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983699 | orchestrator |
2026-02-05 00:56:47.983706 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-05 00:56:47.983718 | orchestrator | Thursday 05 February 2026 00:49:23 +0000 (0:00:00.697) 0:03:25.853 *****
2026-02-05 00:56:47.983726 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983732 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983740 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983747 | orchestrator |
2026-02-05 00:56:47.983754 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-05 00:56:47.983760 | orchestrator | Thursday 05 February 2026 00:49:24 +0000 (0:00:00.692) 0:03:26.545 *****
2026-02-05 00:56:47.983767 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983773 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983780 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983785 | orchestrator |
2026-02-05 00:56:47.983792 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-05 00:56:47.983798 | orchestrator | Thursday 05 February 2026 00:49:24 +0000 (0:00:00.281) 0:03:26.827 *****
2026-02-05 00:56:47.983804 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.983811 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.983822 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.983828 | orchestrator |
2026-02-05 00:56:47.983835 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-05 00:56:47.983842 | orchestrator | Thursday 05 February 2026 00:49:24 +0000 (0:00:00.438) 0:03:27.265 *****
2026-02-05 00:56:47.983848 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983855 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983862 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983868 | orchestrator |
2026-02-05 00:56:47.983875 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-05 00:56:47.983882 | orchestrator | Thursday 05 February 2026 00:49:25 +0000 (0:00:00.452) 0:03:27.718 *****
2026-02-05 00:56:47.983889 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983895 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983902 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983909 | orchestrator |
2026-02-05 00:56:47.983915 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-05 00:56:47.983922 | orchestrator | Thursday 05 February 2026 00:49:25 +0000 (0:00:00.326) 0:03:28.045 *****
2026-02-05 00:56:47.983929 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983936 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983942 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983949 | orchestrator |
2026-02-05 00:56:47.983956 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-05 00:56:47.983962 | orchestrator | Thursday 05 February 2026 00:49:25 +0000 (0:00:00.260) 0:03:28.305 *****
2026-02-05 00:56:47.983969 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.983975 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.983982 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.983988 | orchestrator |
2026-02-05 00:56:47.983995 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-05 00:56:47.984001 | orchestrator | Thursday 05 February 2026 00:49:26 +0000 (0:00:00.794) 0:03:29.100 *****
2026-02-05 00:56:47.984008 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.984014 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.984021 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.984027 | orchestrator |
2026-02-05 00:56:47.984034 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-05 00:56:47.984041 | orchestrator | Thursday 05 February 2026 00:49:27 +0000 (0:00:00.495) 0:03:29.596 *****
2026-02-05 00:56:47.984047 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984054 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984061 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984067 | orchestrator |
2026-02-05 00:56:47.984073 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-05 00:56:47.984079 | orchestrator | Thursday 05 February 2026 00:49:27 +0000 (0:00:00.323) 0:03:29.919 *****
2026-02-05 00:56:47.984090 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984097 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984103 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984110 | orchestrator |
2026-02-05 00:56:47.984117 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-05 00:56:47.984124 | orchestrator | Thursday 05 February 2026 00:49:27 +0000 (0:00:00.299) 0:03:30.219 *****
2026-02-05 00:56:47.984130 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984137 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984144 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984150 | orchestrator |
2026-02-05 00:56:47.984157 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-05 00:56:47.984163 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:00.649) 0:03:30.868 *****
2026-02-05 00:56:47.984170 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984176 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984183 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984190 | orchestrator |
2026-02-05 00:56:47.984198 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-05 00:56:47.984209 | orchestrator | Thursday 05 February 2026 00:49:28 +0000 (0:00:00.346) 0:03:31.214 *****
2026-02-05 00:56:47.984220 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.984231 | orchestrator |
2026-02-05 00:56:47.984241 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-05 00:56:47.984248 | orchestrator | Thursday 05 February 2026 00:49:29 +0000 (0:00:00.476) 0:03:31.691 *****
2026-02-05 00:56:47.984255 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.984262 | orchestrator |
2026-02-05 00:56:47.984274 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-05 00:56:47.984284 | orchestrator | Thursday 05 February 2026 00:49:29 +0000 (0:00:00.120) 0:03:31.811 *****
2026-02-05 00:56:47.984295 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-05 00:56:47.984308 | orchestrator |
2026-02-05 00:56:47.984320 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-05 00:56:47.984327 | orchestrator | Thursday 05 February 2026 00:49:30 +0000 (0:00:00.934) 0:03:32.746 *****
2026-02-05 00:56:47.984335 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984347 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984359 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984371 | orchestrator |
2026-02-05 00:56:47.984382 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-05 00:56:47.984393 | orchestrator | Thursday 05 February 2026 00:49:30 +0000 (0:00:00.467) 0:03:33.213 *****
2026-02-05 00:56:47.984405 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984414 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984423 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984434 | orchestrator |
2026-02-05 00:56:47.984446 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-05 00:56:47.984457 | orchestrator | Thursday 05 February 2026 00:49:31 +0000 (0:00:00.284) 0:03:33.498 *****
2026-02-05 00:56:47.984464 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.984471 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.984478 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984487 | orchestrator |
2026-02-05 00:56:47.984496 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-05 00:56:47.984510 | orchestrator | Thursday 05 February 2026 00:49:32 +0000 (0:00:01.613) 0:03:35.112 *****
2026-02-05 00:56:47.984517 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984524 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.984531 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.984537 | orchestrator |
2026-02-05 00:56:47.984545 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-05 00:56:47.984554 | orchestrator | Thursday 05 February 2026 00:49:33 +0000 (0:00:00.803) 0:03:35.915 *****
2026-02-05 00:56:47.984568 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984574 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.984581 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.984587 | orchestrator |
2026-02-05 00:56:47.984594 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-05 00:56:47.984601 | orchestrator | Thursday 05 February 2026 00:49:34 +0000 (0:00:00.876) 0:03:36.792 *****
2026-02-05 00:56:47.984607 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984614 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984621 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984627 | orchestrator |
2026-02-05 00:56:47.984634 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-05 00:56:47.984641 | orchestrator | Thursday 05 February 2026 00:49:35 +0000 (0:00:00.968) 0:03:37.761 *****
2026-02-05 00:56:47.984648 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984687 | orchestrator |
2026-02-05 00:56:47.984696 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-05 00:56:47.984704 | orchestrator | Thursday 05 February 2026 00:49:36 +0000 (0:00:01.482) 0:03:39.244 *****
2026-02-05 00:56:47.984711 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984717 | orchestrator |
2026-02-05 00:56:47.984724 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-05 00:56:47.984730 | orchestrator | Thursday 05 February 2026 00:49:37 +0000 (0:00:00.652) 0:03:39.896 *****
2026-02-05 00:56:47.984737 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 00:56:47.984743 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 00:56:47.984750 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-05 00:56:47.984756 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-05 00:56:47.984762 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-05 00:56:47.984769 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-05 00:56:47.984775 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-05 00:56:47.984782 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-02-05 00:56:47.984788 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-05 00:56:47.984795 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-05 00:56:47.984801 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-05 00:56:47.984808 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-05 00:56:47.984814 | orchestrator |
2026-02-05 00:56:47.984821 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-05 00:56:47.984827 | orchestrator | Thursday 05 February 2026 00:49:40 +0000 (0:00:03.384) 0:03:43.281 *****
2026-02-05 00:56:47.984833 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984839 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.984844 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.984849 | orchestrator |
2026-02-05 00:56:47.984855 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-05 00:56:47.984860 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:01.379) 0:03:44.660 *****
2026-02-05 00:56:47.984866 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984872 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984877 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984883 | orchestrator |
2026-02-05 00:56:47.984888 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-05 00:56:47.984894 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:00.281) 0:03:44.942 *****
2026-02-05 00:56:47.984899 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:56:47.984905 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:56:47.984911 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:56:47.984916 | orchestrator |
2026-02-05 00:56:47.984922 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-05 00:56:47.984934 | orchestrator | Thursday 05 February 2026 00:49:42 +0000 (0:00:00.270) 0:03:45.212 *****
2026-02-05 00:56:47.984940 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984952 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.984958 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.984963 | orchestrator |
2026-02-05 00:56:47.984969 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-05 00:56:47.984975 | orchestrator | Thursday 05 February 2026 00:49:44 +0000 (0:00:01.514) 0:03:46.726 *****
2026-02-05 00:56:47.984980 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.984985 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.984991 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.984997 | orchestrator |
2026-02-05 00:56:47.985003 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-05 00:56:47.985009 | orchestrator | Thursday 05 February 2026 00:49:45 +0000 (0:00:01.364) 0:03:48.091 *****
2026-02-05 00:56:47.985015 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.985020 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.985026 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.985032 | orchestrator |
2026-02-05 00:56:47.985037 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-02-05 00:56:47.985043 | orchestrator | Thursday 05 February 2026 00:49:46 +0000 (0:00:00.443) 0:03:48.534 *****
2026-02-05 00:56:47.985049 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.985055 | orchestrator |
2026-02-05 00:56:47.985060 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-02-05 00:56:47.985067 | orchestrator | Thursday 05 February 2026 00:49:46 +0000 (0:00:00.463) 0:03:48.997 *****
2026-02-05 00:56:47.985073 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.985083 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.985089 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.985094 | orchestrator |
2026-02-05 00:56:47.985100 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-02-05 00:56:47.985105 | orchestrator | Thursday 05 February 2026 00:49:46 +0000 (0:00:00.259) 0:03:49.257 *****
2026-02-05 00:56:47.985111 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:56:47.985117 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:56:47.985125 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:56:47.985131 | orchestrator |
2026-02-05 00:56:47.985136 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-02-05 00:56:47.985141 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:00.530) 0:03:49.788 *****
2026-02-05 00:56:47.985146 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:56:47.985152 | orchestrator |
2026-02-05 00:56:47.985158 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-02-05 00:56:47.985164 | orchestrator | Thursday 05 February 2026 00:49:47 +0000 (0:00:00.471) 0:03:50.259 *****
2026-02-05 00:56:47.985170 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:56:47.985176 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:56:47.985182 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:56:47.985188 | orchestrator |
2026-02-05 00:56:47.985195 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-05 00:56:47.985202 | orchestrator | Thursday 05 February 2026 00:49:49 +0000 (0:00:02.000) 0:03:52.259 *****
2026-02-05 00:56:47.985208 | orchestrator | changed: [testbed-node-0]
2026-02-05
00:56:47.985214 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.985220 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.985227 | orchestrator | 2026-02-05 00:56:47.985234 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-05 00:56:47.985240 | orchestrator | Thursday 05 February 2026 00:49:51 +0000 (0:00:01.321) 0:03:53.581 ***** 2026-02-05 00:56:47.985251 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.985258 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.985264 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.985270 | orchestrator | 2026-02-05 00:56:47.985276 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-05 00:56:47.985282 | orchestrator | Thursday 05 February 2026 00:49:53 +0000 (0:00:02.095) 0:03:55.677 ***** 2026-02-05 00:56:47.985288 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.985294 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.985300 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.985307 | orchestrator | 2026-02-05 00:56:47.985314 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-05 00:56:47.985320 | orchestrator | Thursday 05 February 2026 00:49:55 +0000 (0:00:02.394) 0:03:58.072 ***** 2026-02-05 00:56:47.985327 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.985334 | orchestrator | 2026-02-05 00:56:47.985340 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-05 00:56:47.985347 | orchestrator | Thursday 05 February 2026 00:49:56 +0000 (0:00:00.697) 0:03:58.769 ***** 2026-02-05 00:56:47.985354 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-05 00:56:47.985361 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.985367 | orchestrator | 2026-02-05 00:56:47.985374 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-05 00:56:47.985380 | orchestrator | Thursday 05 February 2026 00:50:17 +0000 (0:00:21.700) 0:04:20.470 ***** 2026-02-05 00:56:47.985387 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.985394 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.985400 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.985406 | orchestrator | 2026-02-05 00:56:47.985413 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-05 00:56:47.985419 | orchestrator | Thursday 05 February 2026 00:50:27 +0000 (0:00:09.771) 0:04:30.241 ***** 2026-02-05 00:56:47.985426 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.985433 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.985440 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.985446 | orchestrator | 2026-02-05 00:56:47.985453 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-05 00:56:47.985466 | orchestrator | Thursday 05 February 2026 00:50:28 +0000 (0:00:00.265) 0:04:30.507 ***** 2026-02-05 00:56:47.985474 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-05 00:56:47.985481 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-05 00:56:47.985491 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-05 00:56:47.985498 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-05 00:56:47.985509 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-05 00:56:47.985516 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__10441c028cf66aa604f35842ef8ecd317e595168'}])  2026-02-05 00:56:47.985524 | orchestrator | 2026-02-05 00:56:47.985530 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:56:47.985537 | orchestrator | Thursday 05 February 2026 00:50:42 +0000 (0:00:14.656) 0:04:45.163 ***** 2026-02-05 00:56:47.985544 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.985550 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.985556 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.985562 | orchestrator | 2026-02-05 00:56:47.985567 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-05 00:56:47.985573 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:00.338) 0:04:45.502 ***** 2026-02-05 00:56:47.985578 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.985585 | orchestrator | 2026-02-05 00:56:47.985591 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-05 00:56:47.985597 | orchestrator | Thursday 05 February 2026 00:50:43 +0000 (0:00:00.533) 0:04:46.035 ***** 2026-02-05 00:56:47.985602 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.985608 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.985613 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.985620 | orchestrator | 2026-02-05 00:56:47.985626 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-05 00:56:47.985632 | orchestrator | Thursday 05 February 2026 00:50:44 +0000 (0:00:00.563) 0:04:46.598 ***** 2026-02-05 00:56:47.985638 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.985643 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.985649 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.985672 | orchestrator | 2026-02-05 00:56:47.985679 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-05 
00:56:47.985685 | orchestrator | Thursday 05 February 2026 00:50:44 +0000 (0:00:00.454) 0:04:47.052 ***** 2026-02-05 00:56:47.985691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 00:56:47.985697 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 00:56:47.985703 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 00:56:47.985710 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.985717 | orchestrator | 2026-02-05 00:56:47.985724 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-05 00:56:47.985731 | orchestrator | Thursday 05 February 2026 00:50:45 +0000 (0:00:00.681) 0:04:47.734 ***** 2026-02-05 00:56:47.985737 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.985743 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.985760 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.985767 | orchestrator | 2026-02-05 00:56:47.985773 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-02-05 00:56:47.985779 | orchestrator | 2026-02-05 00:56:47.985786 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:56:47.985862 | orchestrator | Thursday 05 February 2026 00:50:45 +0000 (0:00:00.557) 0:04:48.291 ***** 2026-02-05 00:56:47.985874 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.985882 | orchestrator | 2026-02-05 00:56:47.985890 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:56:47.985897 | orchestrator | Thursday 05 February 2026 00:50:46 +0000 (0:00:00.732) 0:04:49.024 ***** 2026-02-05 00:56:47.985908 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-05 00:56:47.985919 | orchestrator | 2026-02-05 00:56:47.985931 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:56:47.985943 | orchestrator | Thursday 05 February 2026 00:50:47 +0000 (0:00:00.528) 0:04:49.553 ***** 2026-02-05 00:56:47.985954 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.985966 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.985979 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.985991 | orchestrator | 2026-02-05 00:56:47.986004 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:56:47.986077 | orchestrator | Thursday 05 February 2026 00:50:48 +0000 (0:00:00.967) 0:04:50.520 ***** 2026-02-05 00:56:47.986117 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986124 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986131 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986137 | orchestrator | 2026-02-05 00:56:47.986144 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:56:47.986150 | orchestrator | Thursday 05 February 2026 00:50:48 +0000 (0:00:00.327) 0:04:50.848 ***** 2026-02-05 00:56:47.986179 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986186 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986192 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986199 | orchestrator | 2026-02-05 00:56:47.986205 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:56:47.986211 | orchestrator | Thursday 05 February 2026 00:50:48 +0000 (0:00:00.306) 0:04:51.155 ***** 2026-02-05 00:56:47.986218 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986224 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986231 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 00:56:47.986238 | orchestrator | 2026-02-05 00:56:47.986243 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:56:47.986249 | orchestrator | Thursday 05 February 2026 00:50:48 +0000 (0:00:00.282) 0:04:51.437 ***** 2026-02-05 00:56:47.986256 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986262 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986267 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.986273 | orchestrator | 2026-02-05 00:56:47.986281 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 00:56:47.986303 | orchestrator | Thursday 05 February 2026 00:50:49 +0000 (0:00:00.713) 0:04:52.150 ***** 2026-02-05 00:56:47.986311 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986318 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986324 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986330 | orchestrator | 2026-02-05 00:56:47.986336 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:56:47.986342 | orchestrator | Thursday 05 February 2026 00:50:50 +0000 (0:00:00.557) 0:04:52.707 ***** 2026-02-05 00:56:47.986348 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986354 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986361 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986367 | orchestrator | 2026-02-05 00:56:47.986373 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:56:47.986379 | orchestrator | Thursday 05 February 2026 00:50:50 +0000 (0:00:00.304) 0:04:53.011 ***** 2026-02-05 00:56:47.986392 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986398 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986403 | orchestrator | ok: [testbed-node-2] 2026-02-05 
00:56:47.986419 | orchestrator | 2026-02-05 00:56:47.986425 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:56:47.986436 | orchestrator | Thursday 05 February 2026 00:50:51 +0000 (0:00:00.702) 0:04:53.714 ***** 2026-02-05 00:56:47.986448 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986453 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986459 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.986465 | orchestrator | 2026-02-05 00:56:47.986471 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:56:47.986477 | orchestrator | Thursday 05 February 2026 00:50:51 +0000 (0:00:00.710) 0:04:54.425 ***** 2026-02-05 00:56:47.986484 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986490 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986496 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986502 | orchestrator | 2026-02-05 00:56:47.986508 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:56:47.986515 | orchestrator | Thursday 05 February 2026 00:50:52 +0000 (0:00:00.631) 0:04:55.056 ***** 2026-02-05 00:56:47.986520 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986526 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986532 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.986539 | orchestrator | 2026-02-05 00:56:47.986545 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:56:47.986551 | orchestrator | Thursday 05 February 2026 00:50:52 +0000 (0:00:00.339) 0:04:55.396 ***** 2026-02-05 00:56:47.986557 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986563 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986570 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986576 | orchestrator | 
2026-02-05 00:56:47.986581 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:56:47.986599 | orchestrator | Thursday 05 February 2026 00:50:53 +0000 (0:00:00.355) 0:04:55.752 ***** 2026-02-05 00:56:47.986606 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986612 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986619 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986626 | orchestrator | 2026-02-05 00:56:47.986633 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:56:47.986640 | orchestrator | Thursday 05 February 2026 00:50:53 +0000 (0:00:00.324) 0:04:56.076 ***** 2026-02-05 00:56:47.986723 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986734 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986740 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986746 | orchestrator | 2026-02-05 00:56:47.986753 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:56:47.986759 | orchestrator | Thursday 05 February 2026 00:50:54 +0000 (0:00:00.540) 0:04:56.616 ***** 2026-02-05 00:56:47.986766 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986773 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986779 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986786 | orchestrator | 2026-02-05 00:56:47.986793 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:56:47.986800 | orchestrator | Thursday 05 February 2026 00:50:54 +0000 (0:00:00.246) 0:04:56.862 ***** 2026-02-05 00:56:47.986807 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.986813 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.986820 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.986826 | orchestrator | 
2026-02-05 00:56:47.986833 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:56:47.986840 | orchestrator | Thursday 05 February 2026 00:50:54 +0000 (0:00:00.230) 0:04:57.093 ***** 2026-02-05 00:56:47.986852 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986859 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986874 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.986881 | orchestrator | 2026-02-05 00:56:47.986888 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:56:47.986895 | orchestrator | Thursday 05 February 2026 00:50:54 +0000 (0:00:00.317) 0:04:57.411 ***** 2026-02-05 00:56:47.986901 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986908 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986914 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.986921 | orchestrator | 2026-02-05 00:56:47.986928 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:56:47.986934 | orchestrator | Thursday 05 February 2026 00:50:55 +0000 (0:00:00.480) 0:04:57.892 ***** 2026-02-05 00:56:47.986941 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.986947 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.986953 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.986969 | orchestrator | 2026-02-05 00:56:47.986976 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-05 00:56:47.986982 | orchestrator | Thursday 05 February 2026 00:50:55 +0000 (0:00:00.496) 0:04:58.388 ***** 2026-02-05 00:56:47.986989 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 00:56:47.986995 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:56:47.987001 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-05 00:56:47.987008 | orchestrator | 2026-02-05 00:56:47.987015 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-05 00:56:47.987021 | orchestrator | Thursday 05 February 2026 00:50:56 +0000 (0:00:00.561) 0:04:58.950 ***** 2026-02-05 00:56:47.987028 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.987035 | orchestrator | 2026-02-05 00:56:47.987042 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-05 00:56:47.987048 | orchestrator | Thursday 05 February 2026 00:50:57 +0000 (0:00:00.612) 0:04:59.562 ***** 2026-02-05 00:56:47.987054 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.987060 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.987066 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.987072 | orchestrator | 2026-02-05 00:56:47.987079 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-05 00:56:47.987085 | orchestrator | Thursday 05 February 2026 00:50:57 +0000 (0:00:00.719) 0:05:00.281 ***** 2026-02-05 00:56:47.987092 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.987098 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.987105 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.987111 | orchestrator | 2026-02-05 00:56:47.987117 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-05 00:56:47.987123 | orchestrator | Thursday 05 February 2026 00:50:58 +0000 (0:00:00.276) 0:05:00.557 ***** 2026-02-05 00:56:47.987130 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 00:56:47.987136 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 00:56:47.987142 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-02-05 00:56:47.987149 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-02-05 00:56:47.987155 | orchestrator | 2026-02-05 00:56:47.987161 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-05 00:56:47.987168 | orchestrator | Thursday 05 February 2026 00:51:09 +0000 (0:00:11.380) 0:05:11.937 ***** 2026-02-05 00:56:47.987184 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.987223 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.987230 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.987237 | orchestrator | 2026-02-05 00:56:47.987265 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-05 00:56:47.987272 | orchestrator | Thursday 05 February 2026 00:51:09 +0000 (0:00:00.313) 0:05:12.251 ***** 2026-02-05 00:56:47.987285 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-05 00:56:47.987292 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 00:56:47.987298 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 00:56:47.987304 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-05 00:56:47.987311 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.987324 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.987330 | orchestrator | 2026-02-05 00:56:47.987337 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:56:47.987344 | orchestrator | Thursday 05 February 2026 00:51:12 +0000 (0:00:02.560) 0:05:14.811 ***** 2026-02-05 00:56:47.987350 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-05 00:56:47.987356 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 00:56:47.987362 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 
00:56:47.987369 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 00:56:47.987374 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-05 00:56:47.987380 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-05 00:56:47.987386 | orchestrator | 2026-02-05 00:56:47.987393 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-05 00:56:47.987399 | orchestrator | Thursday 05 February 2026 00:51:13 +0000 (0:00:01.351) 0:05:16.162 ***** 2026-02-05 00:56:47.987405 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.987412 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.987419 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.987426 | orchestrator | 2026-02-05 00:56:47.987433 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-05 00:56:47.987439 | orchestrator | Thursday 05 February 2026 00:51:14 +0000 (0:00:00.723) 0:05:16.885 ***** 2026-02-05 00:56:47.987446 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.987453 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.987459 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.987466 | orchestrator | 2026-02-05 00:56:47.987477 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-05 00:56:47.987483 | orchestrator | Thursday 05 February 2026 00:51:14 +0000 (0:00:00.273) 0:05:17.159 ***** 2026-02-05 00:56:47.987490 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.987496 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.987503 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.987509 | orchestrator | 2026-02-05 00:56:47.987514 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-05 00:56:47.987521 | orchestrator | Thursday 05 February 2026 00:51:14 +0000 (0:00:00.265) 
0:05:17.424 ***** 2026-02-05 00:56:47.987527 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.987534 | orchestrator | 2026-02-05 00:56:47.987540 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-02-05 00:56:47.987547 | orchestrator | Thursday 05 February 2026 00:51:15 +0000 (0:00:00.651) 0:05:18.076 ***** 2026-02-05 00:56:47.987553 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.987559 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.987565 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.987573 | orchestrator | 2026-02-05 00:56:47.987579 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-05 00:56:47.987585 | orchestrator | Thursday 05 February 2026 00:51:15 +0000 (0:00:00.266) 0:05:18.342 ***** 2026-02-05 00:56:47.987592 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.987598 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.987605 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.987611 | orchestrator | 2026-02-05 00:56:47.987618 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-05 00:56:47.987625 | orchestrator | Thursday 05 February 2026 00:51:16 +0000 (0:00:00.272) 0:05:18.615 ***** 2026-02-05 00:56:47.987637 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.987643 | orchestrator | 2026-02-05 00:56:47.987650 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-05 00:56:47.987671 | orchestrator | Thursday 05 February 2026 00:51:16 +0000 (0:00:00.622) 0:05:19.237 ***** 2026-02-05 00:56:47.987678 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.987685 | orchestrator | 
changed: [testbed-node-1] 2026-02-05 00:56:47.987691 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.987697 | orchestrator | 2026-02-05 00:56:47.987704 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-05 00:56:47.987710 | orchestrator | Thursday 05 February 2026 00:51:17 +0000 (0:00:01.151) 0:05:20.389 ***** 2026-02-05 00:56:47.987717 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.987723 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.987729 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.987750 | orchestrator | 2026-02-05 00:56:47.987757 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-05 00:56:47.987762 | orchestrator | Thursday 05 February 2026 00:51:19 +0000 (0:00:01.190) 0:05:21.579 ***** 2026-02-05 00:56:47.987767 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.987773 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.987779 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.987784 | orchestrator | 2026-02-05 00:56:47.987790 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-05 00:56:47.987795 | orchestrator | Thursday 05 February 2026 00:51:21 +0000 (0:00:01.924) 0:05:23.504 ***** 2026-02-05 00:56:47.987830 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.987836 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.987842 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.987848 | orchestrator | 2026-02-05 00:56:47.987854 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-05 00:56:47.987860 | orchestrator | Thursday 05 February 2026 00:51:23 +0000 (0:00:02.063) 0:05:25.568 ***** 2026-02-05 00:56:47.987866 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.987873 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 00:56:47.987879 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-05 00:56:47.987885 | orchestrator | 2026-02-05 00:56:47.987890 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-05 00:56:47.987896 | orchestrator | Thursday 05 February 2026 00:51:23 +0000 (0:00:00.353) 0:05:25.922 ***** 2026-02-05 00:56:47.987909 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-02-05 00:56:47.987916 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-02-05 00:56:47.987922 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-02-05 00:56:47.987928 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-02-05 00:56:47.987935 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-02-05 00:56:47.987940 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-02-05 00:56:47.987946 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.987952 | orchestrator | 2026-02-05 00:56:47.987958 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-05 00:56:47.987964 | orchestrator | Thursday 05 February 2026 00:52:00 +0000 (0:00:37.161) 0:06:03.083 ***** 2026-02-05 00:56:47.987970 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.987977 | orchestrator | 2026-02-05 00:56:47.987983 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-05 00:56:47.988000 | orchestrator | Thursday 05 February 2026 00:52:01 +0000 (0:00:01.306) 0:06:04.389 ***** 2026-02-05 00:56:47.988007 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.988014 | orchestrator | 2026-02-05 00:56:47.988025 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-05 00:56:47.988032 | orchestrator | Thursday 05 February 2026 00:52:02 +0000 (0:00:00.285) 0:06:04.675 ***** 2026-02-05 00:56:47.988038 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.988045 | orchestrator | 2026-02-05 00:56:47.988051 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-05 00:56:47.988057 | orchestrator | Thursday 05 February 2026 00:52:02 +0000 (0:00:00.281) 0:06:04.956 ***** 2026-02-05 00:56:47.988063 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-02-05 00:56:47.988069 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-02-05 00:56:47.988075 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-02-05 00:56:47.988081 | orchestrator | 2026-02-05 00:56:47.988087 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-02-05 00:56:47.988093 | orchestrator | Thursday 05 February 2026 00:52:08 +0000 (0:00:06.431) 0:06:11.388 ***** 2026-02-05 00:56:47.988100 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-05 00:56:47.988107 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-02-05 00:56:47.988113 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-02-05 00:56:47.988120 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-05 00:56:47.988126 | orchestrator | 2026-02-05 00:56:47.988143 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:56:47.988150 | orchestrator | Thursday 05 February 2026 00:52:13 +0000 (0:00:04.804) 0:06:16.192 ***** 2026-02-05 00:56:47.988157 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.988163 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.988170 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.988176 | orchestrator | 2026-02-05 00:56:47.988183 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-05 00:56:47.988189 | orchestrator | Thursday 05 February 2026 00:52:14 +0000 (0:00:00.607) 0:06:16.799 ***** 2026-02-05 00:56:47.988196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.988203 | orchestrator | 2026-02-05 00:56:47.988209 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-05 00:56:47.988216 | orchestrator | Thursday 05 February 2026 00:52:14 +0000 (0:00:00.664) 0:06:17.463 ***** 2026-02-05 00:56:47.988223 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.988229 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.988236 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 00:56:47.988242 | orchestrator | 2026-02-05 00:56:47.988249 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-05 00:56:47.988255 | orchestrator | Thursday 05 February 2026 00:52:15 +0000 (0:00:00.282) 0:06:17.745 ***** 2026-02-05 00:56:47.988262 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.988268 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.988275 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.988281 | orchestrator | 2026-02-05 00:56:47.988287 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-05 00:56:47.988293 | orchestrator | Thursday 05 February 2026 00:52:16 +0000 (0:00:01.083) 0:06:18.829 ***** 2026-02-05 00:56:47.988300 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-05 00:56:47.988306 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-05 00:56:47.988313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-05 00:56:47.988325 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.988332 | orchestrator | 2026-02-05 00:56:47.988338 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-05 00:56:47.988345 | orchestrator | Thursday 05 February 2026 00:52:17 +0000 (0:00:00.827) 0:06:19.657 ***** 2026-02-05 00:56:47.988352 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.988358 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.988365 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.988372 | orchestrator | 2026-02-05 00:56:47.988378 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-02-05 00:56:47.988385 | orchestrator | 2026-02-05 00:56:47.988392 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 
00:56:47.988405 | orchestrator | Thursday 05 February 2026 00:52:17 +0000 (0:00:00.788) 0:06:20.445 ***** 2026-02-05 00:56:47.988411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.988417 | orchestrator | 2026-02-05 00:56:47.988424 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:56:47.988429 | orchestrator | Thursday 05 February 2026 00:52:18 +0000 (0:00:00.528) 0:06:20.974 ***** 2026-02-05 00:56:47.988436 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.988442 | orchestrator | 2026-02-05 00:56:47.988448 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:56:47.988453 | orchestrator | Thursday 05 February 2026 00:52:19 +0000 (0:00:00.722) 0:06:21.696 ***** 2026-02-05 00:56:47.988460 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.988466 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.988472 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.988478 | orchestrator | 2026-02-05 00:56:47.988485 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:56:47.988491 | orchestrator | Thursday 05 February 2026 00:52:19 +0000 (0:00:00.291) 0:06:21.988 ***** 2026-02-05 00:56:47.988498 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988504 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988511 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988518 | orchestrator | 2026-02-05 00:56:47.988528 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:56:47.988535 | orchestrator | Thursday 05 February 2026 00:52:20 +0000 (0:00:00.714) 0:06:22.703 ***** 
2026-02-05 00:56:47.988542 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988549 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988556 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988562 | orchestrator | 2026-02-05 00:56:47.988568 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:56:47.988574 | orchestrator | Thursday 05 February 2026 00:52:21 +0000 (0:00:00.978) 0:06:23.681 ***** 2026-02-05 00:56:47.988581 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988588 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988594 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988601 | orchestrator | 2026-02-05 00:56:47.988607 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:56:47.988614 | orchestrator | Thursday 05 February 2026 00:52:21 +0000 (0:00:00.676) 0:06:24.358 ***** 2026-02-05 00:56:47.988620 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.988626 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.988633 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.988640 | orchestrator | 2026-02-05 00:56:47.988647 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 00:56:47.988667 | orchestrator | Thursday 05 February 2026 00:52:22 +0000 (0:00:00.331) 0:06:24.689 ***** 2026-02-05 00:56:47.988674 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.988681 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.988688 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.988700 | orchestrator | 2026-02-05 00:56:47.988707 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:56:47.988713 | orchestrator | Thursday 05 February 2026 00:52:22 +0000 (0:00:00.264) 0:06:24.953 ***** 2026-02-05 00:56:47.988720 | 
orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.988726 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.988732 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.988738 | orchestrator | 2026-02-05 00:56:47.988745 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:56:47.988752 | orchestrator | Thursday 05 February 2026 00:52:22 +0000 (0:00:00.426) 0:06:25.380 ***** 2026-02-05 00:56:47.988758 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988764 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988771 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988777 | orchestrator | 2026-02-05 00:56:47.988783 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:56:47.988790 | orchestrator | Thursday 05 February 2026 00:52:23 +0000 (0:00:00.735) 0:06:26.115 ***** 2026-02-05 00:56:47.988796 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988802 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988808 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988814 | orchestrator | 2026-02-05 00:56:47.988821 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:56:47.988827 | orchestrator | Thursday 05 February 2026 00:52:24 +0000 (0:00:00.618) 0:06:26.733 ***** 2026-02-05 00:56:47.988834 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.988841 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.988848 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.988854 | orchestrator | 2026-02-05 00:56:47.988860 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:56:47.988867 | orchestrator | Thursday 05 February 2026 00:52:24 +0000 (0:00:00.284) 0:06:27.017 ***** 2026-02-05 00:56:47.988872 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 00:56:47.988878 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.988885 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.988892 | orchestrator | 2026-02-05 00:56:47.988898 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:56:47.988905 | orchestrator | Thursday 05 February 2026 00:52:24 +0000 (0:00:00.419) 0:06:27.437 ***** 2026-02-05 00:56:47.988911 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988918 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988924 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988931 | orchestrator | 2026-02-05 00:56:47.988937 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:56:47.988944 | orchestrator | Thursday 05 February 2026 00:52:25 +0000 (0:00:00.319) 0:06:27.756 ***** 2026-02-05 00:56:47.988950 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.988957 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.988964 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.988970 | orchestrator | 2026-02-05 00:56:47.988977 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:56:47.988989 | orchestrator | Thursday 05 February 2026 00:52:25 +0000 (0:00:00.350) 0:06:28.107 ***** 2026-02-05 00:56:47.988996 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.989002 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.989008 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.989014 | orchestrator | 2026-02-05 00:56:47.989020 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:56:47.989027 | orchestrator | Thursday 05 February 2026 00:52:25 +0000 (0:00:00.269) 0:06:28.376 ***** 2026-02-05 00:56:47.989034 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.989041 | 
orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.989047 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.989053 | orchestrator | 2026-02-05 00:56:47.989060 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:56:47.989072 | orchestrator | Thursday 05 February 2026 00:52:26 +0000 (0:00:00.426) 0:06:28.802 ***** 2026-02-05 00:56:47.989078 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.989084 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.989090 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.989097 | orchestrator | 2026-02-05 00:56:47.989104 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:56:47.989110 | orchestrator | Thursday 05 February 2026 00:52:26 +0000 (0:00:00.258) 0:06:29.061 ***** 2026-02-05 00:56:47.989116 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.989123 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.989129 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.989135 | orchestrator | 2026-02-05 00:56:47.989141 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:56:47.989152 | orchestrator | Thursday 05 February 2026 00:52:26 +0000 (0:00:00.252) 0:06:29.314 ***** 2026-02-05 00:56:47.989158 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.989165 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.989171 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.989177 | orchestrator | 2026-02-05 00:56:47.989183 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:56:47.989189 | orchestrator | Thursday 05 February 2026 00:52:27 +0000 (0:00:00.292) 0:06:29.607 ***** 2026-02-05 00:56:47.989195 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.989202 | orchestrator | ok: 
[testbed-node-4] 2026-02-05 00:56:47.989208 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.989214 | orchestrator | 2026-02-05 00:56:47.989220 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-05 00:56:47.989227 | orchestrator | Thursday 05 February 2026 00:52:27 +0000 (0:00:00.609) 0:06:30.216 ***** 2026-02-05 00:56:47.989233 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.989240 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.989246 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.989252 | orchestrator | 2026-02-05 00:56:47.989259 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-05 00:56:47.989266 | orchestrator | Thursday 05 February 2026 00:52:28 +0000 (0:00:00.293) 0:06:30.510 ***** 2026-02-05 00:56:47.989273 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:56:47.989279 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:56:47.989286 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:56:47.989292 | orchestrator | 2026-02-05 00:56:47.989299 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-05 00:56:47.989306 | orchestrator | Thursday 05 February 2026 00:52:28 +0000 (0:00:00.698) 0:06:31.208 ***** 2026-02-05 00:56:47.989312 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.989318 | orchestrator | 2026-02-05 00:56:47.989325 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-05 00:56:47.989332 | orchestrator | Thursday 05 February 2026 00:52:29 +0000 (0:00:00.612) 0:06:31.821 ***** 2026-02-05 00:56:47.989339 | orchestrator | skipping: 
[testbed-node-3] 2026-02-05 00:56:47.989346 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.989352 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.989359 | orchestrator | 2026-02-05 00:56:47.989365 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-05 00:56:47.989372 | orchestrator | Thursday 05 February 2026 00:52:29 +0000 (0:00:00.270) 0:06:32.091 ***** 2026-02-05 00:56:47.989377 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.989383 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.989389 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.989396 | orchestrator | 2026-02-05 00:56:47.989403 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-05 00:56:47.989417 | orchestrator | Thursday 05 February 2026 00:52:29 +0000 (0:00:00.274) 0:06:32.366 ***** 2026-02-05 00:56:47.989424 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.989430 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.989437 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.989444 | orchestrator | 2026-02-05 00:56:47.989450 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-05 00:56:47.989456 | orchestrator | Thursday 05 February 2026 00:52:30 +0000 (0:00:00.652) 0:06:33.019 ***** 2026-02-05 00:56:47.989462 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.989469 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.989475 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.989481 | orchestrator | 2026-02-05 00:56:47.989487 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-05 00:56:47.989493 | orchestrator | Thursday 05 February 2026 00:52:31 +0000 (0:00:00.473) 0:06:33.492 ***** 2026-02-05 00:56:47.989498 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 00:56:47.989503 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 00:56:47.989509 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 00:56:47.989515 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 00:56:47.989526 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 00:56:47.989533 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 00:56:47.989538 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-05 00:56:47.989544 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 00:56:47.989549 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 00:56:47.989555 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-05 00:56:47.989561 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 00:56:47.989567 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-05 00:56:47.989573 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-05 00:56:47.989578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 00:56:47.989584 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-05 00:56:47.989590 | orchestrator | 2026-02-05 00:56:47.989596 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
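The "Apply operating system tuning" task above sets the kernel parameters listed in its items. As a sketch only, the same values (copied from the logged items; the drop-in file name is an assumption, not something the role is shown to use) would look like this as a persistent sysctl.d fragment:

```
# /etc/sysctl.d/99-ceph-osd-tuning.conf -- hypothetical drop-in;
# values taken from the "Apply operating system tuning" items above
fs.aio-max-nr = 1048576
fs.file-max = 26234859
vm.zone_reclaim_mode = 0
vm.swappiness = 10
vm.min_free_kbytes = 67584
```

The role applies these at runtime via the sysctl module; a drop-in like the above would merely make the same settings survive a reboot.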
2026-02-05 00:56:47.989606 | orchestrator | Thursday 05 February 2026 00:52:34 +0000 (0:00:03.349) 0:06:36.842 ***** 2026-02-05 00:56:47.989613 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.989619 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.989624 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.989630 | orchestrator | 2026-02-05 00:56:47.989635 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-05 00:56:47.989641 | orchestrator | Thursday 05 February 2026 00:52:34 +0000 (0:00:00.282) 0:06:37.124 ***** 2026-02-05 00:56:47.989647 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.989687 | orchestrator | 2026-02-05 00:56:47.989695 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-05 00:56:47.989701 | orchestrator | Thursday 05 February 2026 00:52:35 +0000 (0:00:00.692) 0:06:37.817 ***** 2026-02-05 00:56:47.989707 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 00:56:47.989713 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 00:56:47.989724 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-05 00:56:47.989730 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-05 00:56:47.989736 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-05 00:56:47.989743 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-05 00:56:47.989748 | orchestrator | 2026-02-05 00:56:47.989754 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-05 00:56:47.989759 | orchestrator | Thursday 05 February 2026 00:52:36 +0000 (0:00:01.008) 0:06:38.825 ***** 2026-02-05 00:56:47.989765 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.989771 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:56:47.989777 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:56:47.989783 | orchestrator | 2026-02-05 00:56:47.989788 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:56:47.989793 | orchestrator | Thursday 05 February 2026 00:52:38 +0000 (0:00:02.023) 0:06:40.849 ***** 2026-02-05 00:56:47.989799 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:56:47.989805 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:56:47.989812 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.989818 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:56:47.989825 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 00:56:47.989832 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.989838 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:56:47.989843 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 00:56:47.989849 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.989855 | orchestrator | 2026-02-05 00:56:47.989862 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-05 00:56:47.989868 | orchestrator | Thursday 05 February 2026 00:52:39 +0000 (0:00:01.088) 0:06:41.937 ***** 2026-02-05 00:56:47.989873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.989879 | orchestrator | 2026-02-05 00:56:47.989885 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-05 00:56:47.989891 | orchestrator | Thursday 05 February 2026 00:52:42 +0000 (0:00:02.701) 0:06:44.638 ***** 2026-02-05 00:56:47.989897 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.989903 | orchestrator | 2026-02-05 00:56:47.989910 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-05 00:56:47.989916 | orchestrator | Thursday 05 February 2026 00:52:42 +0000 (0:00:00.766) 0:06:45.405 ***** 2026-02-05 00:56:47.989923 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c', 'data_vg': 'ceph-50aca8a8-e8e5-56ca-ab64-02beaf30ee0c'}) 2026-02-05 00:56:47.989930 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f', 'data_vg': 'ceph-9bc271eb-ec29-52a2-8b95-ff4dfb27e19f'}) 2026-02-05 00:56:47.989943 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-44714651-8fa8-5efe-842f-d8a32b49e267', 'data_vg': 'ceph-44714651-8fa8-5efe-842f-d8a32b49e267'}) 2026-02-05 00:56:47.989949 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a29ad6cb-22eb-5988-a460-3c83981a9937', 'data_vg': 'ceph-a29ad6cb-22eb-5988-a460-3c83981a9937'}) 2026-02-05 00:56:47.989955 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1b54f13f-3e23-5303-9525-7c2d84d571dd', 'data_vg': 'ceph-1b54f13f-3e23-5303-9525-7c2d84d571dd'}) 2026-02-05 00:56:47.989960 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685', 'data_vg': 'ceph-56069e6e-1b0b-5c3d-aabe-9f5e4e37a685'}) 2026-02-05 00:56:47.989966 | orchestrator | 2026-02-05 00:56:47.989979 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-05 00:56:47.989986 | orchestrator | Thursday 05 February 2026 00:53:25 +0000 (0:00:42.511) 0:07:27.917 ***** 2026-02-05 00:56:47.989992 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.989999 | orchestrator | skipping: [testbed-node-4] 2026-02-05 
00:56:47.990005 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990042 | orchestrator | 2026-02-05 00:56:47.990050 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-05 00:56:47.990057 | orchestrator | Thursday 05 February 2026 00:53:25 +0000 (0:00:00.322) 0:07:28.239 ***** 2026-02-05 00:56:47.990067 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.990073 | orchestrator | 2026-02-05 00:56:47.990079 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-05 00:56:47.990085 | orchestrator | Thursday 05 February 2026 00:53:26 +0000 (0:00:00.613) 0:07:28.853 ***** 2026-02-05 00:56:47.990093 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.990099 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.990105 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.990112 | orchestrator | 2026-02-05 00:56:47.990118 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-05 00:56:47.990124 | orchestrator | Thursday 05 February 2026 00:53:27 +0000 (0:00:00.641) 0:07:29.495 ***** 2026-02-05 00:56:47.990130 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.990137 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.990143 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.990150 | orchestrator | 2026-02-05 00:56:47.990158 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-05 00:56:47.990164 | orchestrator | Thursday 05 February 2026 00:53:29 +0000 (0:00:02.611) 0:07:32.106 ***** 2026-02-05 00:56:47.990170 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.990177 | orchestrator | 2026-02-05 00:56:47.990183 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-05 00:56:47.990189 | orchestrator | Thursday 05 February 2026 00:53:30 +0000 (0:00:00.598) 0:07:32.705 ***** 2026-02-05 00:56:47.990195 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.990201 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.990208 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.990214 | orchestrator | 2026-02-05 00:56:47.990220 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-05 00:56:47.990225 | orchestrator | Thursday 05 February 2026 00:53:31 +0000 (0:00:01.089) 0:07:33.794 ***** 2026-02-05 00:56:47.990231 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.990236 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.990241 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.990248 | orchestrator | 2026-02-05 00:56:47.990253 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-05 00:56:47.990260 | orchestrator | Thursday 05 February 2026 00:53:32 +0000 (0:00:01.171) 0:07:34.965 ***** 2026-02-05 00:56:47.990266 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.990272 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.990279 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.990285 | orchestrator | 2026-02-05 00:56:47.990290 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-05 00:56:47.990296 | orchestrator | Thursday 05 February 2026 00:53:34 +0000 (0:00:02.142) 0:07:37.108 ***** 2026-02-05 00:56:47.990302 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990308 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990315 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990321 | orchestrator | 2026-02-05 00:56:47.990326 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-05 00:56:47.990332 | orchestrator | Thursday 05 February 2026 00:53:34 +0000 (0:00:00.325) 0:07:37.433 ***** 2026-02-05 00:56:47.990342 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990348 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990353 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990358 | orchestrator | 2026-02-05 00:56:47.990364 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-05 00:56:47.990370 | orchestrator | Thursday 05 February 2026 00:53:35 +0000 (0:00:00.339) 0:07:37.772 ***** 2026-02-05 00:56:47.990376 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 00:56:47.990382 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-05 00:56:47.990390 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-02-05 00:56:47.990397 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-05 00:56:47.990403 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-02-05 00:56:47.990410 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-02-05 00:56:47.990417 | orchestrator | 2026-02-05 00:56:47.990423 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-05 00:56:47.990430 | orchestrator | Thursday 05 February 2026 00:53:36 +0000 (0:00:01.031) 0:07:38.804 ***** 2026-02-05 00:56:47.990437 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-05 00:56:47.990443 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-05 00:56:47.990455 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-05 00:56:47.990461 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-05 00:56:47.990467 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-05 00:56:47.990473 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-02-05 00:56:47.990479 | orchestrator | 2026-02-05 00:56:47.990486 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-05 00:56:47.990491 | orchestrator | Thursday 05 February 2026 00:53:38 +0000 (0:00:02.052) 0:07:40.856 ***** 2026-02-05 00:56:47.990498 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-05 00:56:47.990504 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-05 00:56:47.990511 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-05 00:56:47.990517 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-02-05 00:56:47.990522 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-02-05 00:56:47.990529 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-05 00:56:47.990534 | orchestrator | 2026-02-05 00:56:47.990541 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-05 00:56:47.990547 | orchestrator | Thursday 05 February 2026 00:53:42 +0000 (0:00:03.744) 0:07:44.601 ***** 2026-02-05 00:56:47.990553 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990559 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990565 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.990572 | orchestrator | 2026-02-05 00:56:47.990577 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-05 00:56:47.990583 | orchestrator | Thursday 05 February 2026 00:53:44 +0000 (0:00:02.474) 0:07:47.075 ***** 2026-02-05 00:56:47.990593 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990599 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990605 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-05 00:56:47.990611 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.990617 | orchestrator | 2026-02-05 00:56:47.990623 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-05 00:56:47.990629 | orchestrator | Thursday 05 February 2026 00:53:56 +0000 (0:00:12.343) 0:07:59.418 ***** 2026-02-05 00:56:47.990635 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990642 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990648 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990669 | orchestrator | 2026-02-05 00:56:47.990677 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:56:47.990683 | orchestrator | Thursday 05 February 2026 00:53:58 +0000 (0:00:01.269) 0:08:00.687 ***** 2026-02-05 00:56:47.990695 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990700 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990707 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990713 | orchestrator | 2026-02-05 00:56:47.990719 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-05 00:56:47.990725 | orchestrator | Thursday 05 February 2026 00:53:58 +0000 (0:00:00.334) 0:08:01.022 ***** 2026-02-05 00:56:47.990731 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.990737 | orchestrator | 2026-02-05 00:56:47.990743 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-05 00:56:47.990749 | orchestrator | Thursday 05 February 2026 00:53:59 +0000 (0:00:00.750) 0:08:01.772 ***** 2026-02-05 00:56:47.990754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.990760 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-02-05 00:56:47.990766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.990771 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990776 | orchestrator | 2026-02-05 00:56:47.990782 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-05 00:56:47.990788 | orchestrator | Thursday 05 February 2026 00:53:59 +0000 (0:00:00.377) 0:08:02.149 ***** 2026-02-05 00:56:47.990793 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990799 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990804 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990809 | orchestrator | 2026-02-05 00:56:47.990815 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-05 00:56:47.990821 | orchestrator | Thursday 05 February 2026 00:54:00 +0000 (0:00:00.334) 0:08:02.485 ***** 2026-02-05 00:56:47.990827 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990832 | orchestrator | 2026-02-05 00:56:47.990838 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-05 00:56:47.990844 | orchestrator | Thursday 05 February 2026 00:54:00 +0000 (0:00:00.232) 0:08:02.717 ***** 2026-02-05 00:56:47.990850 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990856 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.990862 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.990868 | orchestrator | 2026-02-05 00:56:47.990874 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-05 00:56:47.990882 | orchestrator | Thursday 05 February 2026 00:54:00 +0000 (0:00:00.330) 0:08:03.048 ***** 2026-02-05 00:56:47.990890 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990896 | orchestrator | 2026-02-05 00:56:47.990902 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-05 00:56:47.990909 | orchestrator | Thursday 05 February 2026 00:54:01 +0000 (0:00:00.680) 0:08:03.729 ***** 2026-02-05 00:56:47.990918 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990925 | orchestrator | 2026-02-05 00:56:47.990932 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-05 00:56:47.990939 | orchestrator | Thursday 05 February 2026 00:54:01 +0000 (0:00:00.233) 0:08:03.962 ***** 2026-02-05 00:56:47.990945 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990952 | orchestrator | 2026-02-05 00:56:47.990959 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-05 00:56:47.990966 | orchestrator | Thursday 05 February 2026 00:54:01 +0000 (0:00:00.133) 0:08:04.096 ***** 2026-02-05 00:56:47.990979 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.990987 | orchestrator | 2026-02-05 00:56:47.990995 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-05 00:56:47.991002 | orchestrator | Thursday 05 February 2026 00:54:01 +0000 (0:00:00.210) 0:08:04.306 ***** 2026-02-05 00:56:47.991010 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991017 | orchestrator | 2026-02-05 00:56:47.991023 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-05 00:56:47.991038 | orchestrator | Thursday 05 February 2026 00:54:02 +0000 (0:00:00.278) 0:08:04.585 ***** 2026-02-05 00:56:47.991044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.991051 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.991058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.991066 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:56:47.991072 | orchestrator | 2026-02-05 00:56:47.991078 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-05 00:56:47.991085 | orchestrator | Thursday 05 February 2026 00:54:02 +0000 (0:00:00.423) 0:08:05.008 ***** 2026-02-05 00:56:47.991092 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991099 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991106 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991114 | orchestrator | 2026-02-05 00:56:47.991121 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-05 00:56:47.991129 | orchestrator | Thursday 05 February 2026 00:54:02 +0000 (0:00:00.344) 0:08:05.353 ***** 2026-02-05 00:56:47.991136 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991144 | orchestrator | 2026-02-05 00:56:47.991155 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-05 00:56:47.991163 | orchestrator | Thursday 05 February 2026 00:54:03 +0000 (0:00:00.208) 0:08:05.561 ***** 2026-02-05 00:56:47.991170 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991176 | orchestrator | 2026-02-05 00:56:47.991183 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-02-05 00:56:47.991191 | orchestrator | 2026-02-05 00:56:47.991199 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:56:47.991205 | orchestrator | Thursday 05 February 2026 00:54:03 +0000 (0:00:00.913) 0:08:06.475 ***** 2026-02-05 00:56:47.991211 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.991218 | orchestrator | 2026-02-05 00:56:47.991225 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-02-05 00:56:47.991232 | orchestrator | Thursday 05 February 2026 00:54:05 +0000 (0:00:01.176) 0:08:07.652 ***** 2026-02-05 00:56:47.991238 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-5, testbed-node-4, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.991244 | orchestrator | 2026-02-05 00:56:47.991251 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:56:47.991259 | orchestrator | Thursday 05 February 2026 00:54:06 +0000 (0:00:01.275) 0:08:08.928 ***** 2026-02-05 00:56:47.991268 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991276 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991283 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991290 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.991297 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.991303 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.991309 | orchestrator | 2026-02-05 00:56:47.991315 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:56:47.991322 | orchestrator | Thursday 05 February 2026 00:54:07 +0000 (0:00:01.068) 0:08:09.996 ***** 2026-02-05 00:56:47.991329 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991334 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991340 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991346 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.991352 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991357 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991363 | orchestrator | 2026-02-05 00:56:47.991368 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:56:47.991380 | orchestrator | Thursday 05 
February 2026 00:54:08 +0000 (0:00:00.896) 0:08:10.893 ***** 2026-02-05 00:56:47.991385 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991391 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991398 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991403 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991409 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991414 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.991420 | orchestrator | 2026-02-05 00:56:47.991426 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:56:47.991432 | orchestrator | Thursday 05 February 2026 00:54:09 +0000 (0:00:00.776) 0:08:11.669 ***** 2026-02-05 00:56:47.991437 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991443 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991449 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991455 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991460 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.991465 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991471 | orchestrator | 2026-02-05 00:56:47.991476 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:56:47.991483 | orchestrator | Thursday 05 February 2026 00:54:09 +0000 (0:00:00.689) 0:08:12.359 ***** 2026-02-05 00:56:47.991488 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991493 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991500 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991506 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.991513 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.991519 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.991526 | orchestrator | 2026-02-05 00:56:47.991532 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-02-05 00:56:47.991545 | orchestrator | Thursday 05 February 2026 00:54:10 +0000 (0:00:00.930) 0:08:13.289 ***** 2026-02-05 00:56:47.991553 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991559 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991566 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991571 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991577 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991583 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991590 | orchestrator | 2026-02-05 00:56:47.991596 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:56:47.991602 | orchestrator | Thursday 05 February 2026 00:54:11 +0000 (0:00:00.612) 0:08:13.902 ***** 2026-02-05 00:56:47.991609 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991614 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991620 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991625 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991632 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991638 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991645 | orchestrator | 2026-02-05 00:56:47.991652 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:56:47.991694 | orchestrator | Thursday 05 February 2026 00:54:11 +0000 (0:00:00.479) 0:08:14.381 ***** 2026-02-05 00:56:47.991701 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991707 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991714 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.991720 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.991727 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.991732 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.991738 | orchestrator 
| 2026-02-05 00:56:47.991743 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:56:47.991754 | orchestrator | Thursday 05 February 2026 00:54:13 +0000 (0:00:01.181) 0:08:15.563 ***** 2026-02-05 00:56:47.991760 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991766 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991772 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.991783 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.991789 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.991796 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.991802 | orchestrator | 2026-02-05 00:56:47.991808 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:56:47.991814 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.931) 0:08:16.495 ***** 2026-02-05 00:56:47.991821 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991826 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991832 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991838 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991845 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991851 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991857 | orchestrator | 2026-02-05 00:56:47.991863 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:56:47.991870 | orchestrator | Thursday 05 February 2026 00:54:14 +0000 (0:00:00.619) 0:08:17.115 ***** 2026-02-05 00:56:47.991876 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.991882 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.991888 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.991894 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.991900 | orchestrator | ok: [testbed-node-1] 2026-02-05 
00:56:47.991906 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.991911 | orchestrator | 2026-02-05 00:56:47.991917 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:56:47.991923 | orchestrator | Thursday 05 February 2026 00:54:15 +0000 (0:00:00.555) 0:08:17.671 ***** 2026-02-05 00:56:47.991929 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991935 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991941 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.991948 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.991954 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.991960 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.991966 | orchestrator | 2026-02-05 00:56:47.991972 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:56:47.991978 | orchestrator | Thursday 05 February 2026 00:54:16 +0000 (0:00:00.852) 0:08:18.523 ***** 2026-02-05 00:56:47.991984 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.991991 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.991997 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.992003 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.992009 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.992016 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.992021 | orchestrator | 2026-02-05 00:56:47.992027 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:56:47.992033 | orchestrator | Thursday 05 February 2026 00:54:16 +0000 (0:00:00.599) 0:08:19.123 ***** 2026-02-05 00:56:47.992039 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.992045 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.992051 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.992058 | orchestrator | skipping: [testbed-node-0] 
2026-02-05 00:56:47.992065 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.992071 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.992077 | orchestrator | 2026-02-05 00:56:47.992083 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:56:47.992089 | orchestrator | Thursday 05 February 2026 00:54:17 +0000 (0:00:00.847) 0:08:19.970 ***** 2026-02-05 00:56:47.992095 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.992102 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.992109 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.992115 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.992122 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.992128 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.992134 | orchestrator | 2026-02-05 00:56:47.992139 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:56:47.992151 | orchestrator | Thursday 05 February 2026 00:54:18 +0000 (0:00:00.613) 0:08:20.584 ***** 2026-02-05 00:56:47.992157 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.992163 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.992169 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.992175 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:56:47.992180 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:56:47.992186 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:56:47.992192 | orchestrator | 2026-02-05 00:56:47.992198 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:56:47.992212 | orchestrator | Thursday 05 February 2026 00:54:18 +0000 (0:00:00.706) 0:08:21.290 ***** 2026-02-05 00:56:47.992220 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.992226 | orchestrator | skipping: [testbed-node-4] 
2026-02-05 00:56:47.992233 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.992239 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.992246 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.992252 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.992258 | orchestrator | 2026-02-05 00:56:47.992265 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:56:47.992271 | orchestrator | Thursday 05 February 2026 00:54:19 +0000 (0:00:00.515) 0:08:21.806 ***** 2026-02-05 00:56:47.992278 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.992284 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.992291 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.992297 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.992303 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.992309 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.992316 | orchestrator | 2026-02-05 00:56:47.992323 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:56:47.992329 | orchestrator | Thursday 05 February 2026 00:54:19 +0000 (0:00:00.508) 0:08:22.315 ***** 2026-02-05 00:56:47.992336 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.992342 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.992348 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.992354 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.992360 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.992367 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.992373 | orchestrator | 2026-02-05 00:56:47.992379 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-05 00:56:47.992390 | orchestrator | Thursday 05 February 2026 00:54:20 +0000 (0:00:01.099) 0:08:23.415 ***** 2026-02-05 00:56:47.992397 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.992403 | orchestrator | 2026-02-05 00:56:47.992409 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-05 00:56:47.992416 | orchestrator | Thursday 05 February 2026 00:54:25 +0000 (0:00:04.237) 0:08:27.653 ***** 2026-02-05 00:56:47.992423 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.992429 | orchestrator | 2026-02-05 00:56:47.992435 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-05 00:56:47.992442 | orchestrator | Thursday 05 February 2026 00:54:27 +0000 (0:00:02.351) 0:08:30.004 ***** 2026-02-05 00:56:47.992448 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.992454 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.992460 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.992467 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.992473 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.992478 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.992485 | orchestrator | 2026-02-05 00:56:47.992491 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-05 00:56:47.992496 | orchestrator | Thursday 05 February 2026 00:54:29 +0000 (0:00:01.865) 0:08:31.869 ***** 2026-02-05 00:56:47.992502 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.992513 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.992519 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.992524 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.992530 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.992536 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.992542 | orchestrator | 2026-02-05 00:56:47.992547 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-02-05 00:56:47.992554 | orchestrator | Thursday 05 February 2026 00:54:30 +0000 (0:00:01.215) 0:08:33.085 ***** 2026-02-05 00:56:47.992561 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.992567 | orchestrator | 2026-02-05 00:56:47.992573 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-05 00:56:47.992579 | orchestrator | Thursday 05 February 2026 00:54:31 +0000 (0:00:01.006) 0:08:34.091 ***** 2026-02-05 00:56:47.992585 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.992591 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.992597 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.992603 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.992609 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.992614 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.992620 | orchestrator | 2026-02-05 00:56:47.992626 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-05 00:56:47.992632 | orchestrator | Thursday 05 February 2026 00:54:33 +0000 (0:00:01.739) 0:08:35.831 ***** 2026-02-05 00:56:47.992638 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.992644 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.992649 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.992669 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.992675 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.992680 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.992686 | orchestrator | 2026-02-05 00:56:47.992692 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-02-05 00:56:47.992698 | orchestrator | Thursday 05 February 2026 00:54:36 +0000 (0:00:03.480) 
0:08:39.312 ***** 2026-02-05 00:56:47.992703 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:56:47.992710 | orchestrator | 2026-02-05 00:56:47.992715 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-02-05 00:56:47.992721 | orchestrator | Thursday 05 February 2026 00:54:38 +0000 (0:00:01.332) 0:08:40.644 ***** 2026-02-05 00:56:47.992727 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.992733 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.992739 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.992746 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.992753 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.992759 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.992765 | orchestrator | 2026-02-05 00:56:47.992772 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-02-05 00:56:47.992784 | orchestrator | Thursday 05 February 2026 00:54:39 +0000 (0:00:00.866) 0:08:41.510 ***** 2026-02-05 00:56:47.992791 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.992798 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.992805 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.992812 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:56:47.992818 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:56:47.992824 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:56:47.992831 | orchestrator | 2026-02-05 00:56:47.992837 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-02-05 00:56:47.992843 | orchestrator | Thursday 05 February 2026 00:54:41 +0000 (0:00:02.613) 0:08:44.124 ***** 2026-02-05 00:56:47.992849 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.992860 | orchestrator | 
ok: [testbed-node-4] 2026-02-05 00:56:47.992867 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.992873 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:56:47.992879 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:56:47.992885 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:56:47.992890 | orchestrator | 2026-02-05 00:56:47.992896 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-02-05 00:56:47.992902 | orchestrator | 2026-02-05 00:56:47.992908 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:56:47.992914 | orchestrator | Thursday 05 February 2026 00:54:42 +0000 (0:00:01.152) 0:08:45.276 ***** 2026-02-05 00:56:47.992920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.992927 | orchestrator | 2026-02-05 00:56:47.992933 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:56:47.992944 | orchestrator | Thursday 05 February 2026 00:54:43 +0000 (0:00:00.745) 0:08:46.022 ***** 2026-02-05 00:56:47.992950 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.992955 | orchestrator | 2026-02-05 00:56:47.992961 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:56:47.992967 | orchestrator | Thursday 05 February 2026 00:54:44 +0000 (0:00:00.513) 0:08:46.536 ***** 2026-02-05 00:56:47.992974 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.992980 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.992986 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.992991 | orchestrator | 2026-02-05 00:56:47.992997 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-02-05 00:56:47.993003 | orchestrator | Thursday 05 February 2026 00:54:44 +0000 (0:00:00.297) 0:08:46.833 ***** 2026-02-05 00:56:47.993008 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993014 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993020 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993027 | orchestrator | 2026-02-05 00:56:47.993034 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:56:47.993041 | orchestrator | Thursday 05 February 2026 00:54:45 +0000 (0:00:01.144) 0:08:47.978 ***** 2026-02-05 00:56:47.993047 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993056 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993063 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993070 | orchestrator | 2026-02-05 00:56:47.993077 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:56:47.993083 | orchestrator | Thursday 05 February 2026 00:54:46 +0000 (0:00:00.790) 0:08:48.768 ***** 2026-02-05 00:56:47.993090 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993098 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993105 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993111 | orchestrator | 2026-02-05 00:56:47.993117 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:56:47.993124 | orchestrator | Thursday 05 February 2026 00:54:47 +0000 (0:00:00.721) 0:08:49.490 ***** 2026-02-05 00:56:47.993132 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993138 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993145 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993151 | orchestrator | 2026-02-05 00:56:47.993158 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 
00:56:47.993165 | orchestrator | Thursday 05 February 2026 00:54:47 +0000 (0:00:00.282) 0:08:49.772 ***** 2026-02-05 00:56:47.993171 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993178 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993185 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993191 | orchestrator | 2026-02-05 00:56:47.993198 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:56:47.993205 | orchestrator | Thursday 05 February 2026 00:54:47 +0000 (0:00:00.443) 0:08:50.216 ***** 2026-02-05 00:56:47.993218 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993229 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993241 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993253 | orchestrator | 2026-02-05 00:56:47.993260 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:56:47.993267 | orchestrator | Thursday 05 February 2026 00:54:48 +0000 (0:00:00.276) 0:08:50.493 ***** 2026-02-05 00:56:47.993273 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993279 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993284 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993290 | orchestrator | 2026-02-05 00:56:47.993296 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:56:47.993302 | orchestrator | Thursday 05 February 2026 00:54:48 +0000 (0:00:00.769) 0:08:51.262 ***** 2026-02-05 00:56:47.993308 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993315 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993321 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993327 | orchestrator | 2026-02-05 00:56:47.993334 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:56:47.993340 | orchestrator | 
Thursday 05 February 2026 00:54:49 +0000 (0:00:00.825) 0:08:52.088 ***** 2026-02-05 00:56:47.993346 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993352 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993359 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993365 | orchestrator | 2026-02-05 00:56:47.993371 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:56:47.993384 | orchestrator | Thursday 05 February 2026 00:54:50 +0000 (0:00:00.590) 0:08:52.679 ***** 2026-02-05 00:56:47.993391 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993397 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993404 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993410 | orchestrator | 2026-02-05 00:56:47.993416 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:56:47.993423 | orchestrator | Thursday 05 February 2026 00:54:50 +0000 (0:00:00.522) 0:08:53.201 ***** 2026-02-05 00:56:47.993429 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993436 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993442 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993448 | orchestrator | 2026-02-05 00:56:47.993454 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:56:47.993460 | orchestrator | Thursday 05 February 2026 00:54:51 +0000 (0:00:00.714) 0:08:53.916 ***** 2026-02-05 00:56:47.993467 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993472 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993478 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993484 | orchestrator | 2026-02-05 00:56:47.993490 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:56:47.993497 | orchestrator | Thursday 05 February 2026 00:54:52 
+0000 (0:00:00.709) 0:08:54.626 ***** 2026-02-05 00:56:47.993504 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993511 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993518 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993524 | orchestrator | 2026-02-05 00:56:47.993531 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:56:47.993542 | orchestrator | Thursday 05 February 2026 00:54:52 +0000 (0:00:00.775) 0:08:55.401 ***** 2026-02-05 00:56:47.993547 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993553 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993559 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993564 | orchestrator | 2026-02-05 00:56:47.993570 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:56:47.993575 | orchestrator | Thursday 05 February 2026 00:54:53 +0000 (0:00:00.324) 0:08:55.726 ***** 2026-02-05 00:56:47.993581 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993592 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993598 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993604 | orchestrator | 2026-02-05 00:56:47.993610 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:56:47.993616 | orchestrator | Thursday 05 February 2026 00:54:53 +0000 (0:00:00.308) 0:08:56.034 ***** 2026-02-05 00:56:47.993622 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993628 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993635 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993640 | orchestrator | 2026-02-05 00:56:47.993646 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:56:47.993652 | orchestrator | Thursday 05 February 2026 00:54:53 +0000 (0:00:00.294) 
0:08:56.329 ***** 2026-02-05 00:56:47.993696 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993702 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993709 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993715 | orchestrator | 2026-02-05 00:56:47.993721 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:56:47.993728 | orchestrator | Thursday 05 February 2026 00:54:54 +0000 (0:00:00.770) 0:08:57.099 ***** 2026-02-05 00:56:47.993734 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.993741 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.993748 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.993754 | orchestrator | 2026-02-05 00:56:47.993761 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-05 00:56:47.993768 | orchestrator | Thursday 05 February 2026 00:54:55 +0000 (0:00:00.785) 0:08:57.885 ***** 2026-02-05 00:56:47.993774 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.993780 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.993786 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-02-05 00:56:47.993793 | orchestrator | 2026-02-05 00:56:47.993799 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-02-05 00:56:47.993805 | orchestrator | Thursday 05 February 2026 00:54:56 +0000 (0:00:00.684) 0:08:58.570 ***** 2026-02-05 00:56:47.993812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.993819 | orchestrator | 2026-02-05 00:56:47.993825 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-02-05 00:56:47.993831 | orchestrator | Thursday 05 February 2026 00:54:58 +0000 (0:00:02.515) 0:09:01.085 ***** 2026-02-05 00:56:47.993839 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-05 00:56:47.993847 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.993853 | orchestrator | 2026-02-05 00:56:47.993860 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-05 00:56:47.993866 | orchestrator | Thursday 05 February 2026 00:54:58 +0000 (0:00:00.347) 0:09:01.432 ***** 2026-02-05 00:56:47.993873 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:56:47.993885 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:56:47.993893 | orchestrator | 2026-02-05 00:56:47.993908 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-05 00:56:47.993916 | orchestrator | Thursday 05 February 2026 00:55:06 +0000 (0:00:07.984) 0:09:09.417 ***** 2026-02-05 00:56:47.993923 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-05 00:56:47.993935 | orchestrator | 2026-02-05 00:56:47.993941 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-05 00:56:47.993946 | orchestrator | Thursday 05 February 2026 00:55:10 +0000 (0:00:03.745) 0:09:13.163 ***** 2026-02-05 00:56:47.993952 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-05 00:56:47.993958 | orchestrator | 2026-02-05 00:56:47.993964 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-05 00:56:47.993971 | orchestrator | Thursday 05 February 2026 00:55:11 +0000 (0:00:00.504) 0:09:13.667 ***** 2026-02-05 00:56:47.993977 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 00:56:47.993983 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 00:56:47.993990 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-05 00:56:47.993997 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-05 00:56:47.994003 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-05 00:56:47.994009 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-05 00:56:47.994045 | orchestrator | 2026-02-05 00:56:47.994056 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-05 00:56:47.994062 | orchestrator | Thursday 05 February 2026 00:55:12 +0000 (0:00:01.485) 0:09:15.153 ***** 2026-02-05 00:56:47.994068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.994074 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:56:47.994081 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:56:47.994086 | orchestrator | 2026-02-05 00:56:47.994093 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:56:47.994099 | orchestrator | Thursday 05 February 2026 00:55:14 +0000 (0:00:02.184) 0:09:17.337 ***** 2026-02-05 00:56:47.994105 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:56:47.994111 | orchestrator | changed: [testbed-node-3] 
=> (item=None) 2026-02-05 00:56:47.994118 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 00:56:47.994124 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994130 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:56:47.994136 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994143 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:56:47.994149 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 00:56:47.994156 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994162 | orchestrator | 2026-02-05 00:56:47.994168 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-05 00:56:47.994174 | orchestrator | Thursday 05 February 2026 00:55:16 +0000 (0:00:01.233) 0:09:18.571 ***** 2026-02-05 00:56:47.994180 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994185 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994191 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994197 | orchestrator | 2026-02-05 00:56:47.994204 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-05 00:56:47.994211 | orchestrator | Thursday 05 February 2026 00:55:18 +0000 (0:00:02.807) 0:09:21.378 ***** 2026-02-05 00:56:47.994217 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.994224 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.994231 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.994237 | orchestrator | 2026-02-05 00:56:47.994244 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-05 00:56:47.994250 | orchestrator | Thursday 05 February 2026 00:55:19 +0000 (0:00:00.295) 0:09:21.674 ***** 2026-02-05 00:56:47.994256 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-05 00:56:47.994271 | orchestrator | 2026-02-05 00:56:47.994277 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-05 00:56:47.994282 | orchestrator | Thursday 05 February 2026 00:55:19 +0000 (0:00:00.780) 0:09:22.455 ***** 2026-02-05 00:56:47.994288 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.994294 | orchestrator | 2026-02-05 00:56:47.994300 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-05 00:56:47.994306 | orchestrator | Thursday 05 February 2026 00:55:20 +0000 (0:00:00.521) 0:09:22.976 ***** 2026-02-05 00:56:47.994313 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994319 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994325 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994332 | orchestrator | 2026-02-05 00:56:47.994338 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-05 00:56:47.994344 | orchestrator | Thursday 05 February 2026 00:55:22 +0000 (0:00:01.624) 0:09:24.600 ***** 2026-02-05 00:56:47.994350 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994356 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994362 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994368 | orchestrator | 2026-02-05 00:56:47.994374 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-05 00:56:47.994379 | orchestrator | Thursday 05 February 2026 00:55:23 +0000 (0:00:01.197) 0:09:25.798 ***** 2026-02-05 00:56:47.994385 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994391 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994398 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994404 | orchestrator | 2026-02-05 
00:56:47.994411 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-05 00:56:47.994424 | orchestrator | Thursday 05 February 2026 00:55:24 +0000 (0:00:01.610) 0:09:27.408 ***** 2026-02-05 00:56:47.994431 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994438 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994444 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994451 | orchestrator | 2026-02-05 00:56:47.994457 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-05 00:56:47.994464 | orchestrator | Thursday 05 February 2026 00:55:26 +0000 (0:00:01.966) 0:09:29.375 ***** 2026-02-05 00:56:47.994470 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.994476 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.994483 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.994489 | orchestrator | 2026-02-05 00:56:47.994495 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:56:47.994501 | orchestrator | Thursday 05 February 2026 00:55:28 +0000 (0:00:01.534) 0:09:30.909 ***** 2026-02-05 00:56:47.994507 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994514 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994520 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994527 | orchestrator | 2026-02-05 00:56:47.994534 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-05 00:56:47.994540 | orchestrator | Thursday 05 February 2026 00:55:29 +0000 (0:00:00.654) 0:09:31.564 ***** 2026-02-05 00:56:47.994547 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.994554 | orchestrator | 2026-02-05 00:56:47.994561 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-05 00:56:47.994570 | orchestrator | Thursday 05 February 2026 00:55:29 +0000 (0:00:00.623) 0:09:32.187 ***** 2026-02-05 00:56:47.994576 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.994582 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.994588 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.994594 | orchestrator | 2026-02-05 00:56:47.994600 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-05 00:56:47.994607 | orchestrator | Thursday 05 February 2026 00:55:29 +0000 (0:00:00.279) 0:09:32.467 ***** 2026-02-05 00:56:47.994620 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.994626 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.994632 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.994639 | orchestrator | 2026-02-05 00:56:47.994645 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-05 00:56:47.994652 | orchestrator | Thursday 05 February 2026 00:55:31 +0000 (0:00:01.185) 0:09:33.653 ***** 2026-02-05 00:56:47.994674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.994680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.994686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.994693 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.994699 | orchestrator | 2026-02-05 00:56:47.994705 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-05 00:56:47.994711 | orchestrator | Thursday 05 February 2026 00:55:31 +0000 (0:00:00.791) 0:09:34.444 ***** 2026-02-05 00:56:47.994717 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.994723 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.994730 | orchestrator | ok: [testbed-node-5] 2026-02-05 
00:56:47.994737 | orchestrator | 2026-02-05 00:56:47.994744 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-05 00:56:47.994751 | orchestrator | 2026-02-05 00:56:47.994757 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-05 00:56:47.994764 | orchestrator | Thursday 05 February 2026 00:55:32 +0000 (0:00:00.702) 0:09:35.147 ***** 2026-02-05 00:56:47.994771 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.994777 | orchestrator | 2026-02-05 00:56:47.994783 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-05 00:56:47.994789 | orchestrator | Thursday 05 February 2026 00:55:33 +0000 (0:00:00.448) 0:09:35.595 ***** 2026-02-05 00:56:47.994795 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.994801 | orchestrator | 2026-02-05 00:56:47.994808 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-05 00:56:47.994814 | orchestrator | Thursday 05 February 2026 00:55:34 +0000 (0:00:00.909) 0:09:36.505 ***** 2026-02-05 00:56:47.994820 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.994826 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.994832 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.994839 | orchestrator | 2026-02-05 00:56:47.994845 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-05 00:56:47.994850 | orchestrator | Thursday 05 February 2026 00:55:34 +0000 (0:00:00.351) 0:09:36.856 ***** 2026-02-05 00:56:47.994857 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.994863 | orchestrator | ok: [testbed-node-4] 2026-02-05 
00:56:47.994870 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.994876 | orchestrator | 2026-02-05 00:56:47.994882 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-05 00:56:47.994889 | orchestrator | Thursday 05 February 2026 00:55:35 +0000 (0:00:00.775) 0:09:37.632 ***** 2026-02-05 00:56:47.994895 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.994902 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.994907 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.994913 | orchestrator | 2026-02-05 00:56:47.994919 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-05 00:56:47.994926 | orchestrator | Thursday 05 February 2026 00:55:36 +0000 (0:00:01.200) 0:09:38.832 ***** 2026-02-05 00:56:47.994932 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.994938 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.994944 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.994950 | orchestrator | 2026-02-05 00:56:47.994956 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-05 00:56:47.994968 | orchestrator | Thursday 05 February 2026 00:55:37 +0000 (0:00:00.799) 0:09:39.632 ***** 2026-02-05 00:56:47.994974 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.994988 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.994994 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995000 | orchestrator | 2026-02-05 00:56:47.995006 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-05 00:56:47.995012 | orchestrator | Thursday 05 February 2026 00:55:37 +0000 (0:00:00.317) 0:09:39.950 ***** 2026-02-05 00:56:47.995018 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995025 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995030 | orchestrator | skipping: 
[testbed-node-5] 2026-02-05 00:56:47.995036 | orchestrator | 2026-02-05 00:56:47.995043 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-05 00:56:47.995049 | orchestrator | Thursday 05 February 2026 00:55:37 +0000 (0:00:00.299) 0:09:40.249 ***** 2026-02-05 00:56:47.995055 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995060 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995065 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995071 | orchestrator | 2026-02-05 00:56:47.995077 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-05 00:56:47.995083 | orchestrator | Thursday 05 February 2026 00:55:38 +0000 (0:00:00.738) 0:09:40.988 ***** 2026-02-05 00:56:47.995089 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995095 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995100 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995106 | orchestrator | 2026-02-05 00:56:47.995112 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-05 00:56:47.995117 | orchestrator | Thursday 05 February 2026 00:55:39 +0000 (0:00:00.910) 0:09:41.898 ***** 2026-02-05 00:56:47.995123 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995128 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995138 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995144 | orchestrator | 2026-02-05 00:56:47.995151 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-05 00:56:47.995157 | orchestrator | Thursday 05 February 2026 00:55:40 +0000 (0:00:00.791) 0:09:42.690 ***** 2026-02-05 00:56:47.995162 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995168 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995174 | orchestrator | skipping: [testbed-node-5] 2026-02-05 
00:56:47.995180 | orchestrator | 2026-02-05 00:56:47.995186 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-05 00:56:47.995192 | orchestrator | Thursday 05 February 2026 00:55:40 +0000 (0:00:00.333) 0:09:43.024 ***** 2026-02-05 00:56:47.995198 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995203 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995210 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995215 | orchestrator | 2026-02-05 00:56:47.995221 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-05 00:56:47.995227 | orchestrator | Thursday 05 February 2026 00:55:41 +0000 (0:00:00.636) 0:09:43.660 ***** 2026-02-05 00:56:47.995233 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995240 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995245 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995252 | orchestrator | 2026-02-05 00:56:47.995257 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-05 00:56:47.995264 | orchestrator | Thursday 05 February 2026 00:55:41 +0000 (0:00:00.353) 0:09:44.014 ***** 2026-02-05 00:56:47.995271 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995277 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995283 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995288 | orchestrator | 2026-02-05 00:56:47.995294 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-05 00:56:47.995299 | orchestrator | Thursday 05 February 2026 00:55:41 +0000 (0:00:00.465) 0:09:44.480 ***** 2026-02-05 00:56:47.995310 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995315 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995321 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995326 | orchestrator | 2026-02-05 
00:56:47.995331 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-05 00:56:47.995336 | orchestrator | Thursday 05 February 2026 00:55:42 +0000 (0:00:00.432) 0:09:44.912 ***** 2026-02-05 00:56:47.995342 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995348 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995354 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995360 | orchestrator | 2026-02-05 00:56:47.995365 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-05 00:56:47.995371 | orchestrator | Thursday 05 February 2026 00:55:43 +0000 (0:00:00.766) 0:09:45.679 ***** 2026-02-05 00:56:47.995377 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995383 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995389 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995394 | orchestrator | 2026-02-05 00:56:47.995400 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-05 00:56:47.995406 | orchestrator | Thursday 05 February 2026 00:55:43 +0000 (0:00:00.310) 0:09:45.990 ***** 2026-02-05 00:56:47.995412 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995418 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995424 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995429 | orchestrator | 2026-02-05 00:56:47.995435 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-05 00:56:47.995441 | orchestrator | Thursday 05 February 2026 00:55:43 +0000 (0:00:00.334) 0:09:46.324 ***** 2026-02-05 00:56:47.995446 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995452 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995458 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995464 | orchestrator | 2026-02-05 00:56:47.995470 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-05 00:56:47.995476 | orchestrator | Thursday 05 February 2026 00:55:44 +0000 (0:00:00.335) 0:09:46.660 ***** 2026-02-05 00:56:47.995481 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.995486 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.995494 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.995500 | orchestrator | 2026-02-05 00:56:47.995507 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-05 00:56:47.995513 | orchestrator | Thursday 05 February 2026 00:55:45 +0000 (0:00:00.823) 0:09:47.484 ***** 2026-02-05 00:56:47.995527 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.995534 | orchestrator | 2026-02-05 00:56:47.995541 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 00:56:47.995547 | orchestrator | Thursday 05 February 2026 00:55:45 +0000 (0:00:00.537) 0:09:48.022 ***** 2026-02-05 00:56:47.995554 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995560 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:56:47.995567 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:56:47.995573 | orchestrator | 2026-02-05 00:56:47.995579 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:56:47.995585 | orchestrator | Thursday 05 February 2026 00:55:48 +0000 (0:00:02.704) 0:09:50.726 ***** 2026-02-05 00:56:47.995591 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:56:47.995597 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-05 00:56:47.995603 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.995609 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-05 00:56:47.995614 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-05 00:56:47.995620 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.995632 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:56:47.995637 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-05 00:56:47.995643 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.995648 | orchestrator | 2026-02-05 00:56:47.995688 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-05 00:56:47.995701 | orchestrator | Thursday 05 February 2026 00:55:49 +0000 (0:00:01.612) 0:09:52.339 ***** 2026-02-05 00:56:47.995708 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.995713 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.995719 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.995725 | orchestrator | 2026-02-05 00:56:47.995731 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-05 00:56:47.995737 | orchestrator | Thursday 05 February 2026 00:55:50 +0000 (0:00:00.309) 0:09:52.648 ***** 2026-02-05 00:56:47.995743 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.995749 | orchestrator | 2026-02-05 00:56:47.995755 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-05 00:56:47.995761 | orchestrator | Thursday 05 February 2026 00:55:50 +0000 (0:00:00.510) 0:09:53.159 ***** 2026-02-05 00:56:47.995767 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.995775 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.995781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.995787 | orchestrator | 2026-02-05 00:56:47.995793 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-05 00:56:47.995799 | orchestrator | Thursday 05 February 2026 00:55:51 +0000 (0:00:01.100) 0:09:54.260 ***** 2026-02-05 00:56:47.995805 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995811 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 00:56:47.995817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995824 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 00:56:47.995830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995835 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-05 00:56:47.995841 | orchestrator | 2026-02-05 00:56:47.995846 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-05 00:56:47.995851 | orchestrator | Thursday 05 February 2026 00:55:56 +0000 (0:00:05.118) 0:09:59.379 ***** 2026-02-05 00:56:47.995856 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995862 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:56:47.995868 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995874 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:56:47.995880 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:56:47.995886 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:56:47.995892 | orchestrator | 2026-02-05 00:56:47.995898 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-05 00:56:47.995904 | orchestrator | Thursday 05 February 2026 00:56:00 +0000 (0:00:03.205) 0:10:02.584 ***** 2026-02-05 00:56:47.995967 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-05 00:56:47.995977 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.995983 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-05 00:56:47.995989 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.995995 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-05 00:56:47.996001 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.996007 | orchestrator | 2026-02-05 00:56:47.996021 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-05 00:56:47.996028 | orchestrator | Thursday 05 February 2026 00:56:01 +0000 (0:00:01.180) 0:10:03.764 ***** 2026-02-05 00:56:47.996034 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-05 00:56:47.996040 | orchestrator | 2026-02-05 00:56:47.996046 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-05 00:56:47.996052 | orchestrator | Thursday 05 February 2026 00:56:01 +0000 (0:00:00.373) 0:10:04.138 ***** 2026-02-05 00:56:47.996059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-05 00:56:47.996066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996095 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.996101 | orchestrator | 2026-02-05 00:56:47.996108 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-05 00:56:47.996114 | orchestrator | Thursday 05 February 2026 00:56:02 +0000 (0:00:00.524) 0:10:04.662 ***** 2026-02-05 00:56:47.996120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-05 00:56:47.996152 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
00:56:47.996158 | orchestrator | 2026-02-05 00:56:47.996164 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-05 00:56:47.996171 | orchestrator | Thursday 05 February 2026 00:56:02 +0000 (0:00:00.555) 0:10:05.218 ***** 2026-02-05 00:56:47.996177 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:56:47.996185 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:56:47.996191 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:56:47.996197 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:56:47.996210 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-05 00:56:47.996216 | orchestrator | 2026-02-05 00:56:47.996223 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-05 00:56:47.996229 | orchestrator | Thursday 05 February 2026 00:56:34 +0000 (0:00:31.396) 0:10:36.615 ***** 2026-02-05 00:56:47.996235 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.996242 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.996248 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.996255 | orchestrator | 2026-02-05 00:56:47.996261 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-05 00:56:47.996267 | orchestrator | 
Thursday 05 February 2026 00:56:34 +0000 (0:00:00.300) 0:10:36.915 ***** 2026-02-05 00:56:47.996274 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.996280 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.996287 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.996293 | orchestrator | 2026-02-05 00:56:47.996300 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-05 00:56:47.996306 | orchestrator | Thursday 05 February 2026 00:56:34 +0000 (0:00:00.289) 0:10:37.205 ***** 2026-02-05 00:56:47.996313 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.996320 | orchestrator | 2026-02-05 00:56:47.996327 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-05 00:56:47.996333 | orchestrator | Thursday 05 February 2026 00:56:35 +0000 (0:00:00.789) 0:10:37.994 ***** 2026-02-05 00:56:47.996345 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.996352 | orchestrator | 2026-02-05 00:56:47.996358 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-05 00:56:47.996365 | orchestrator | Thursday 05 February 2026 00:56:36 +0000 (0:00:00.501) 0:10:38.496 ***** 2026-02-05 00:56:47.996371 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.996378 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.996384 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.996390 | orchestrator | 2026-02-05 00:56:47.996396 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-05 00:56:47.996401 | orchestrator | Thursday 05 February 2026 00:56:37 +0000 (0:00:01.429) 0:10:39.925 ***** 2026-02-05 00:56:47.996407 | orchestrator | changed: 
[testbed-node-3] 2026-02-05 00:56:47.996412 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.996418 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.996424 | orchestrator | 2026-02-05 00:56:47.996430 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-05 00:56:47.996435 | orchestrator | Thursday 05 February 2026 00:56:38 +0000 (0:00:01.051) 0:10:40.977 ***** 2026-02-05 00:56:47.996441 | orchestrator | changed: [testbed-node-4] 2026-02-05 00:56:47.996447 | orchestrator | changed: [testbed-node-3] 2026-02-05 00:56:47.996453 | orchestrator | changed: [testbed-node-5] 2026-02-05 00:56:47.996460 | orchestrator | 2026-02-05 00:56:47.996466 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-05 00:56:47.996472 | orchestrator | Thursday 05 February 2026 00:56:40 +0000 (0:00:01.718) 0:10:42.695 ***** 2026-02-05 00:56:47.996483 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.996489 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.996496 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-05 00:56:47.996508 | orchestrator | 2026-02-05 00:56:47.996515 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-05 00:56:47.996521 | orchestrator | Thursday 05 February 2026 00:56:42 +0000 (0:00:02.770) 0:10:45.466 ***** 2026-02-05 00:56:47.996527 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.996534 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.996540 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.996547 | orchestrator 
| 2026-02-05 00:56:47.996553 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-05 00:56:47.996559 | orchestrator | Thursday 05 February 2026 00:56:43 +0000 (0:00:00.339) 0:10:45.806 ***** 2026-02-05 00:56:47.996565 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:56:47.996571 | orchestrator | 2026-02-05 00:56:47.996577 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-05 00:56:47.996583 | orchestrator | Thursday 05 February 2026 00:56:44 +0000 (0:00:00.941) 0:10:46.747 ***** 2026-02-05 00:56:47.996589 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.996595 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.996601 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.996607 | orchestrator | 2026-02-05 00:56:47.996613 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-05 00:56:47.996619 | orchestrator | Thursday 05 February 2026 00:56:44 +0000 (0:00:00.401) 0:10:47.148 ***** 2026-02-05 00:56:47.996626 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:56:47.996632 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:56:47.996638 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:56:47.996644 | orchestrator | 2026-02-05 00:56:47.996650 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-05 00:56:47.996670 | orchestrator | Thursday 05 February 2026 00:56:45 +0000 (0:00:00.339) 0:10:47.488 ***** 2026-02-05 00:56:47.996676 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:56:47.996683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:56:47.996689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:56:47.996695 | orchestrator 
| skipping: [testbed-node-3] 2026-02-05 00:56:47.996702 | orchestrator | 2026-02-05 00:56:47.996708 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-05 00:56:47.996715 | orchestrator | Thursday 05 February 2026 00:56:46 +0000 (0:00:01.009) 0:10:48.498 ***** 2026-02-05 00:56:47.996721 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:56:47.996727 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:56:47.996734 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:56:47.996740 | orchestrator | 2026-02-05 00:56:47.996746 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:56:47.996752 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-05 00:56:47.996759 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-05 00:56:47.996765 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-05 00:56:47.996772 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-05 00:56:47.996779 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-05 00:56:47.996792 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-05 00:56:47.996799 | orchestrator | 2026-02-05 00:56:47.996811 | orchestrator | 2026-02-05 00:56:47.996817 | orchestrator | 2026-02-05 00:56:47.996823 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:56:47.996829 | orchestrator | Thursday 05 February 2026 00:56:46 +0000 (0:00:00.299) 0:10:48.797 ***** 2026-02-05 00:56:47.996835 | orchestrator | =============================================================================== 
2026-02-05 00:56:47.996840 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.51s 2026-02-05 00:56:47.996846 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 40.27s 2026-02-05 00:56:47.996851 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 37.16s 2026-02-05 00:56:47.996856 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.40s 2026-02-05 00:56:47.996862 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.70s 2026-02-05 00:56:47.996868 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.66s 2026-02-05 00:56:47.996875 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.34s 2026-02-05 00:56:47.996881 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.38s 2026-02-05 00:56:47.996890 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.77s 2026-02-05 00:56:47.996896 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.98s 2026-02-05 00:56:47.996902 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.77s 2026-02-05 00:56:47.996908 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.43s 2026-02-05 00:56:47.996913 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.12s 2026-02-05 00:56:47.996919 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.80s 2026-02-05 00:56:47.996925 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.24s 2026-02-05 00:56:47.996931 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.75s 2026-02-05 
00:56:47.996937 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.74s 2026-02-05 00:56:47.996943 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.48s 2026-02-05 00:56:47.996949 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.38s 2026-02-05 00:56:47.996955 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.35s 2026-02-05 00:56:47.996961 | orchestrator | 2026-02-05 00:56:47 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED 2026-02-05 00:56:47.996967 | orchestrator | 2026-02-05 00:56:47 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:56:47.996973 | orchestrator | 2026-02-05 00:56:47 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:56:51.019736 | orchestrator | 2026-02-05 00:56:51 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:56:51.021977 | orchestrator | 2026-02-05 00:56:51 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED 2026-02-05 00:56:51.024233 | orchestrator | 2026-02-05 00:56:51 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:56:51.024282 | orchestrator | 2026-02-05 00:56:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:56:54.078198 | orchestrator | 2026-02-05 00:56:54 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:56:54.080185 | orchestrator | 2026-02-05 00:56:54 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED 2026-02-05 00:56:54.082501 | orchestrator | 2026-02-05 00:56:54 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:56:54.082582 | orchestrator | 2026-02-05 00:56:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:56:57.138558 | orchestrator | 2026-02-05 00:56:57 | INFO  | Task 
fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:56:57.141406 | orchestrator | 2026-02-05 00:56:57 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state STARTED 2026-02-05 00:56:57.144296 | orchestrator | 2026-02-05 00:56:57 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:56:57.144990 | orchestrator | 2026-02-05 00:56:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:00.195214 | orchestrator | 2026-02-05 00:57:00 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:00.197501 | orchestrator | 2026-02-05 00:57:00 | INFO  | Task ad570194-115f-4605-8f9a-cbdc0425a68a is in state SUCCESS 2026-02-05 00:57:00.199191 | orchestrator | 2026-02-05 00:57:00.199247 | orchestrator | 2026-02-05 00:57:00.199256 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:57:00.199263 | orchestrator | 2026-02-05 00:57:00.199269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:57:00.199276 | orchestrator | Thursday 05 February 2026 00:54:21 +0000 (0:00:00.218) 0:00:00.218 ***** 2026-02-05 00:57:00.199282 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:57:00.199290 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:57:00.199295 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:57:00.199334 | orchestrator | 2026-02-05 00:57:00.199342 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:57:00.199349 | orchestrator | Thursday 05 February 2026 00:54:21 +0000 (0:00:00.241) 0:00:00.459 ***** 2026-02-05 00:57:00.199355 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-05 00:57:00.199362 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-05 00:57:00.199368 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-05 
00:57:00.199374 | orchestrator | 2026-02-05 00:57:00.199380 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-05 00:57:00.199385 | orchestrator | 2026-02-05 00:57:00.199392 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-05 00:57:00.199398 | orchestrator | Thursday 05 February 2026 00:54:22 +0000 (0:00:00.374) 0:00:00.833 ***** 2026-02-05 00:57:00.199405 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:57:00.199411 | orchestrator | 2026-02-05 00:57:00.199417 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-05 00:57:00.199424 | orchestrator | Thursday 05 February 2026 00:54:22 +0000 (0:00:00.370) 0:00:01.204 ***** 2026-02-05 00:57:00.199469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 00:57:00.199479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 00:57:00.199485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-05 00:57:00.199491 | orchestrator | 2026-02-05 00:57:00.199495 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-05 00:57:00.199499 | orchestrator | Thursday 05 February 2026 00:54:23 +0000 (0:00:00.730) 0:00:01.934 ***** 2026-02-05 00:57:00.199506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:57:00.199530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:57:00.199546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-05 00:57:00.199552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-05 00:57:00.199561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199574 | orchestrator |
2026-02-05 00:57:00.199578 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-05 00:57:00.199582 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:01.545) 0:00:03.479 *****
2026-02-05 00:57:00.199586 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:57:00.199590 | orchestrator |
2026-02-05 00:57:00.199594 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-05 00:57:00.199597 | orchestrator | Thursday 05 February 2026 00:54:25 +0000 (0:00:00.553) 0:00:04.033 *****
2026-02-05 00:57:00.199606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199642 | orchestrator |
2026-02-05 00:57:00.199646 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-05 00:57:00.199754 | orchestrator | Thursday 05 February 2026 00:54:27 +0000 (0:00:02.466) 0:00:06.499 *****
2026-02-05 00:57:00.199775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199796 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:00.199804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199827 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:00.199854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199874 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:00.199880 | orchestrator |
2026-02-05 00:57:00.199887 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-05 00:57:00.199893 | orchestrator | Thursday 05 February 2026 00:54:29 +0000 (0:00:01.224) 0:00:07.724 *****
2026-02-05 00:57:00.199899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199917 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:00.199925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199938 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:00.199943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.199958 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:00.199963 | orchestrator |
2026-02-05 00:57:00.199967 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-05 00:57:00.199972 | orchestrator | Thursday 05 February 2026 00:54:30 +0000 (0:00:01.083) 0:00:08.807 *****
2026-02-05 00:57:00.199979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.199995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.200006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.200020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.200032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.200038 | orchestrator |
2026-02-05 00:57:00.200045 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-05 00:57:00.200051 | orchestrator | Thursday 05 February 2026 00:54:32 +0000 (0:00:03.013) 0:00:11.351 *****
2026-02-05 00:57:00.200058 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:00.200065 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:00.200072 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:00.200078 | orchestrator |
2026-02-05 00:57:00.200084 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-05 00:57:00.200091 | orchestrator | Thursday 05 February 2026 00:54:35 +0000 (0:00:03.013) 0:00:14.364 *****
2026-02-05 00:57:00.200097 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:00.200102 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:00.200107 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:00.200111 | orchestrator |
2026-02-05 00:57:00.200116 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-02-05 00:57:00.200120 | orchestrator | Thursday 05 February 2026 00:54:37 +0000 (0:00:01.817) 0:00:16.181 *****
2026-02-05 00:57:00.200125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.200263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.200281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-05 00:57:00.200285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.200290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.200298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-05 00:57:00.200306 | orchestrator |
2026-02-05 00:57:00.200311 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-05 00:57:00.200318 | orchestrator | Thursday 05 February 2026 00:54:39 +0000 (0:00:02.221) 0:00:18.402 *****
2026-02-05 00:57:00.200324 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:00.200329 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:00.200336 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:00.200342 | orchestrator |
2026-02-05 00:57:00.200348 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 00:57:00.200354 | orchestrator | Thursday 05 February 2026 00:54:40 +0000 (0:00:00.322) 0:00:18.725 *****
2026-02-05 00:57:00.200359 | orchestrator |
2026-02-05 00:57:00.200365 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 00:57:00.200373 | orchestrator | Thursday 05 February 2026 00:54:40 +0000 (0:00:00.070) 0:00:18.795 *****
2026-02-05 00:57:00.200379 | orchestrator |
2026-02-05 00:57:00.200388 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-02-05 00:57:00.200395 | orchestrator | Thursday 05 February 2026 00:54:40 +0000 (0:00:00.073) 0:00:18.869 *****
2026-02-05 00:57:00.200402 | orchestrator |
2026-02-05 00:57:00.200408 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-02-05 00:57:00.200414 | orchestrator | Thursday 05 February 2026 00:54:40 +0000 (0:00:00.067) 0:00:18.936 *****
2026-02-05 00:57:00.200420 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:00.200426 | orchestrator |
2026-02-05 00:57:00.200432 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-02-05 00:57:00.200437 | orchestrator | Thursday 05 February 2026 00:54:40 +0000 (0:00:00.213) 0:00:19.150 *****
2026-02-05 00:57:00.200444 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:00.200449 | orchestrator |
2026-02-05 00:57:00.200455 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-02-05 00:57:00.200461 | orchestrator | Thursday 05 February 2026 00:54:41 +0000 (0:00:00.564) 0:00:19.714 *****
2026-02-05 00:57:00.200468 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:00.200476 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:00.200485 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:00.200491 | orchestrator |
2026-02-05 00:57:00.200498 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-02-05 00:57:00.200504 | orchestrator | Thursday 05 February 2026 00:55:34 +0000 (0:00:53.630) 0:01:13.345 *****
2026-02-05 00:57:00.200510 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:00.200516 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:00.200559 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:00.200567 | orchestrator |
2026-02-05 00:57:00.200573 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-05 00:57:00.200579 | orchestrator | Thursday 05 February 2026 00:56:44 +0000 (0:01:09.276) 0:02:22.621 *****
2026-02-05 00:57:00.200586 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:57:00.200592 | orchestrator |
2026-02-05 00:57:00.200598 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-02-05 00:57:00.200604 | orchestrator | Thursday 05 February 2026 00:56:44 +0000 (0:00:00.502) 0:02:23.123 *****
2026-02-05 00:57:00.200611 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:00.200618 | orchestrator |
2026-02-05 00:57:00.200624 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] **************
2026-02-05 00:57:00.200631 | orchestrator | Thursday 05 February 2026 00:56:47 +0000 (0:00:03.125) 0:02:26.249 *****
2026-02-05 00:57:00.200644 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:00.200650 | orchestrator |
2026-02-05 00:57:00.200677 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-02-05 00:57:00.200684 | orchestrator | Thursday 05 February 2026 00:56:50 +0000 (0:00:02.318) 0:02:28.568 *****
2026-02-05 00:57:00.200690 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:00.200695 | orchestrator |
2026-02-05 00:57:00.200700 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-02-05 00:57:00.200706 | orchestrator | Thursday 05 February 2026 00:56:52 +0000 (0:00:02.174) 0:02:30.742 *****
2026-02-05 00:57:00.200712 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:00.200717 | orchestrator |
2026-02-05 00:57:00.200723 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-02-05 00:57:00.200729 | orchestrator | Thursday 05 February 2026 00:56:55 +0000 (0:00:03.252) 0:02:33.994 *****
2026-02-05 00:57:00.200735 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:00.200741 | orchestrator |
2026-02-05 00:57:00.200746 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:57:00.200753 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 00:57:00.200761 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:57:00.200776 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-05 00:57:00.200783 | orchestrator |
2026-02-05 00:57:00.200789 | orchestrator |
2026-02-05 00:57:00.200795 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:57:00.200802 | orchestrator | Thursday 05 February 2026 00:56:58 +0000 (0:00:03.030) 0:02:37.024 *****
2026-02-05 00:57:00.200807 | orchestrator | ===============================================================================
2026-02-05 00:57:00.200812 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 69.28s
2026-02-05 00:57:00.200819 | orchestrator | opensearch : Restart opensearch container ------------------------------ 53.63s
2026-02-05 00:57:00.200825 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.25s
2026-02-05 00:57:00.200831 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.13s
2026-02-05 00:57:00.200836 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.03s
2026-02-05 00:57:00.200843 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.01s
2026-02-05 00:57:00.200849 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.54s
2026-02-05 00:57:00.200855 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.47s
2026-02-05 00:57:00.200860 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.32s
2026-02-05 00:57:00.200866 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.22s
2026-02-05 00:57:00.200873 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.17s
2026-02-05 00:57:00.200880 | orchestrator | opensearch : Copying over
opensearch-dashboards config file ------------- 1.82s 2026-02-05 00:57:00.200886 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.55s 2026-02-05 00:57:00.200893 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.23s 2026-02-05 00:57:00.200900 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.08s 2026-02-05 00:57:00.200907 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.73s 2026-02-05 00:57:00.200914 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.56s 2026-02-05 00:57:00.200920 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-05 00:57:00.200932 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-02-05 00:57:00.200940 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s 2026-02-05 00:57:00.200946 | orchestrator | 2026-02-05 00:57:00 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:00.200953 | orchestrator | 2026-02-05 00:57:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:03.236097 | orchestrator | 2026-02-05 00:57:03 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:03.236452 | orchestrator | 2026-02-05 00:57:03 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:03.236484 | orchestrator | 2026-02-05 00:57:03 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:06.285237 | orchestrator | 2026-02-05 00:57:06 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:06.287583 | orchestrator | 2026-02-05 00:57:06 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:06.287711 | 
orchestrator | 2026-02-05 00:57:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:09.321245 | orchestrator | 2026-02-05 00:57:09 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:09.323280 | orchestrator | 2026-02-05 00:57:09 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:09.323350 | orchestrator | 2026-02-05 00:57:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:12.365574 | orchestrator | 2026-02-05 00:57:12 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:12.367251 | orchestrator | 2026-02-05 00:57:12 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:12.367379 | orchestrator | 2026-02-05 00:57:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:15.417404 | orchestrator | 2026-02-05 00:57:15 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:15.418453 | orchestrator | 2026-02-05 00:57:15 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:15.418493 | orchestrator | 2026-02-05 00:57:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:18.468264 | orchestrator | 2026-02-05 00:57:18 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:18.468492 | orchestrator | 2026-02-05 00:57:18 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:18.469461 | orchestrator | 2026-02-05 00:57:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:21.513010 | orchestrator | 2026-02-05 00:57:21 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state STARTED 2026-02-05 00:57:21.514606 | orchestrator | 2026-02-05 00:57:21 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:21.514644 | orchestrator | 2026-02-05 00:57:21 | INFO  | Wait 1 second(s) until the next 
check 2026-02-05 00:57:24.572834 | orchestrator | 2026-02-05 00:57:24 | INFO  | Task fc3721ea-5cbd-4933-b78e-df823ec1fcb8 is in state SUCCESS 2026-02-05 00:57:24.574115 | orchestrator | 2026-02-05 00:57:24.574171 | orchestrator | 2026-02-05 00:57:24.574181 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-05 00:57:24.574191 | orchestrator | 2026-02-05 00:57:24.574200 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-05 00:57:24.574207 | orchestrator | Thursday 05 February 2026 00:54:21 +0000 (0:00:00.079) 0:00:00.079 ***** 2026-02-05 00:57:24.574212 | orchestrator | ok: [localhost] => { 2026-02-05 00:57:24.574237 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-02-05 00:57:24.574243 | orchestrator | } 2026-02-05 00:57:24.574248 | orchestrator | 2026-02-05 00:57:24.574253 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-02-05 00:57:24.574258 | orchestrator | Thursday 05 February 2026 00:54:21 +0000 (0:00:00.052) 0:00:00.132 ***** 2026-02-05 00:57:24.574274 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-05 00:57:24.574281 | orchestrator | ...ignoring 2026-02-05 00:57:24.574286 | orchestrator | 2026-02-05 00:57:24.574291 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-05 00:57:24.574296 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:02.818) 0:00:02.951 ***** 2026-02-05 00:57:24.574301 | orchestrator | skipping: [localhost] 2026-02-05 00:57:24.574306 | orchestrator | 2026-02-05 00:57:24.574310 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-05 00:57:24.574315 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.058) 0:00:03.009 ***** 2026-02-05 00:57:24.574320 | orchestrator | ok: [localhost] 2026-02-05 00:57:24.574325 | orchestrator | 2026-02-05 00:57:24.574329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:57:24.574334 | orchestrator | 2026-02-05 00:57:24.574339 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:57:24.574344 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.158) 0:00:03.167 ***** 2026-02-05 00:57:24.574348 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:57:24.574353 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:57:24.574358 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:57:24.574363 | orchestrator | 2026-02-05 00:57:24.574367 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:57:24.574372 | orchestrator | Thursday 05 February 2026 00:54:24 +0000 (0:00:00.286) 0:00:03.454 ***** 2026-02-05 00:57:24.574377 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-05 00:57:24.574382 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-02-05 00:57:24.574387 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-05 00:57:24.574392 | orchestrator | 2026-02-05 00:57:24.574397 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-05 00:57:24.574401 | orchestrator | 2026-02-05 00:57:24.574406 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-05 00:57:24.574411 | orchestrator | Thursday 05 February 2026 00:54:25 +0000 (0:00:00.546) 0:00:04.001 ***** 2026-02-05 00:57:24.574416 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-05 00:57:24.574421 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-05 00:57:24.574425 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-05 00:57:24.574430 | orchestrator | 2026-02-05 00:57:24.574435 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 00:57:24.574439 | orchestrator | Thursday 05 February 2026 00:54:25 +0000 (0:00:00.365) 0:00:04.366 ***** 2026-02-05 00:57:24.574444 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:57:24.574450 | orchestrator | 2026-02-05 00:57:24.574458 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-05 00:57:24.574468 | orchestrator | Thursday 05 February 2026 00:54:26 +0000 (0:00:00.463) 0:00:04.829 ***** 2026-02-05 00:57:24.574498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:57:24.574521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:57:24.574531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:57:24.574548 | orchestrator | 2026-02-05 00:57:24.574560 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-05 00:57:24.574567 | orchestrator | Thursday 05 February 2026 00:54:29 +0000 (0:00:03.058) 0:00:07.888 ***** 2026-02-05 00:57:24.574574 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:57:24.574583 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:24.574590 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:57:24.574598 | orchestrator | 2026-02-05 00:57:24.574604 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-05 00:57:24.574612 | orchestrator | Thursday 05 February 2026 00:54:30 +0000 (0:00:00.819) 0:00:08.707 ***** 2026-02-05 00:57:24.574619 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
00:57:24.574626 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:57:24.574633 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:24.574639 | orchestrator | 2026-02-05 00:57:24.574647 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-05 00:57:24.574688 | orchestrator | Thursday 05 February 2026 00:54:31 +0000 (0:00:01.491) 0:00:10.199 ***** 2026-02-05 00:57:24.574703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:57:24.574718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 00:57:24.574737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-05 
00:57:24.574745 | orchestrator | 2026-02-05 00:57:24.574753 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-05 00:57:24.574760 | orchestrator | Thursday 05 February 2026 00:54:35 +0000 (0:00:04.000) 0:00:14.199 ***** 2026-02-05 00:57:24.574767 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:57:24.574775 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:57:24.574783 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:24.574795 | orchestrator | 2026-02-05 00:57:24.574803 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-05 00:57:24.574811 | orchestrator | Thursday 05 February 2026 00:54:36 +0000 (0:00:01.141) 0:00:15.340 ***** 2026-02-05 00:57:24.574818 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:57:24.574826 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:57:24.574835 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:57:24.574842 | orchestrator | 2026-02-05 00:57:24.574850 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-05 00:57:24.574858 | orchestrator | Thursday 05 February 2026 00:54:41 +0000 (0:00:04.822) 0:00:20.162 ***** 2026-02-05 00:57:24.574866 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:57:24.574874 | orchestrator | 2026-02-05 00:57:24.574883 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-05 00:57:24.574891 | orchestrator | Thursday 05 February 2026 00:54:42 +0000 (0:00:01.057) 0:00:21.219 ***** 2026-02-05 00:57:24.574918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-05 00:57:24.574925 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:57:24.574932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-05 00:57:24.574942 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.574953 | orchestrator | skipping: [testbed-node-1] => (item=mariadb)
2026-02-05 00:57:24.574959 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.574965 | orchestrator |
2026-02-05 00:57:24.574971 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-05 00:57:24.574977 | orchestrator | Thursday 05 February 2026 00:54:46 +0000 (0:00:03.599) 0:00:24.819 *****
2026-02-05 00:57:24.574985 | orchestrator | skipping: [testbed-node-0] => (item=mariadb)
2026-02-05 00:57:24.574995 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575004 | orchestrator | skipping: [testbed-node-1] => (item=mariadb)
2026-02-05 00:57:24.575011 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575020 | orchestrator | skipping: [testbed-node-2] => (item=mariadb)
2026-02-05 00:57:24.575030 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575036 | orchestrator |
2026-02-05 00:57:24.575042 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-05 00:57:24.575047 | orchestrator | Thursday 05 February 2026 00:54:50 +0000 (0:00:04.050) 0:00:28.869 *****
2026-02-05 00:57:24.575052 | orchestrator | skipping: [testbed-node-1] => (item=mariadb)
2026-02-05 00:57:24.575057 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575070 | orchestrator | skipping: [testbed-node-2] => (item=mariadb)
2026-02-05 00:57:24.575083 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575096 | orchestrator | skipping: [testbed-node-0] => (item=mariadb)
2026-02-05 00:57:24.575106 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575113 | orchestrator |
2026-02-05 00:57:24.575121 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-02-05 00:57:24.575128 | orchestrator | Thursday 05 February 2026 00:54:53 +0000 (0:00:03.449) 0:00:32.319 *****
2026-02-05 00:57:24.575353 | orchestrator | changed: [testbed-node-0] => (item=mariadb)
2026-02-05 00:57:24.575381 | orchestrator | changed: [testbed-node-2] => (item=mariadb)
2026-02-05 00:57:24.575401 | orchestrator | changed: [testbed-node-1] => (item=mariadb)
2026-02-05 00:57:24.575415 | orchestrator |
2026-02-05 00:57:24.575425 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-05 00:57:24.575430 | orchestrator | Thursday 05 February 2026 00:54:57 +0000 (0:00:04.055) 0:00:36.374 *****
2026-02-05 00:57:24.575435 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:24.575440 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.575445 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:24.575450 | orchestrator |
2026-02-05 00:57:24.575455 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-05 00:57:24.575460 | orchestrator | Thursday 05 February 2026 00:54:58 +0000 (0:00:00.801) 0:00:37.176 *****
2026-02-05 00:57:24.575465 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575470 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.575475 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.575480 | orchestrator |
2026-02-05 00:57:24.575485 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-05 00:57:24.575490 | orchestrator | Thursday 05 February 2026 00:54:59 +0000 (0:00:00.547) 0:00:37.724 *****
2026-02-05 00:57:24.575495 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575500 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.575505 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.575509 | orchestrator |
2026-02-05 00:57:24.575514 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-05 00:57:24.575519 | orchestrator | Thursday 05 February 2026 00:54:59 +0000 (0:00:00.319) 0:00:38.044 *****
2026-02-05 00:57:24.575525 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-05 00:57:24.575531 | orchestrator | ...ignoring
2026-02-05 00:57:24.575536 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-05 00:57:24.575541 | orchestrator | ...ignoring
2026-02-05 00:57:24.575546 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-05 00:57:24.575551 | orchestrator | ...ignoring
2026-02-05 00:57:24.575556 | orchestrator |
2026-02-05 00:57:24.575561 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-05 00:57:24.575566 | orchestrator | Thursday 05 February 2026 00:55:10 +0000 (0:00:10.909) 0:00:48.953 *****
2026-02-05 00:57:24.575571 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575579 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.575586 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.575599 | orchestrator |
2026-02-05 00:57:24.575606 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-05 00:57:24.575613 | orchestrator | Thursday 05 February 2026 00:55:10 +0000 (0:00:00.418) 0:00:49.371 *****
2026-02-05 00:57:24.575621 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575628 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575635 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575643 | orchestrator |
2026-02-05 00:57:24.575650 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-05 00:57:24.575683 | orchestrator | Thursday 05 February 2026 00:55:11 +0000 (0:00:00.602) 0:00:49.973 *****
2026-02-05 00:57:24.575690 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575698 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575705 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575713 | orchestrator |
2026-02-05 00:57:24.575721 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-05 00:57:24.575729 | orchestrator | Thursday 05 February 2026 00:55:11 +0000 (0:00:00.418) 0:00:50.391 *****
2026-02-05 00:57:24.575737 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575745 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575753 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575765 | orchestrator |
2026-02-05 00:57:24.575770 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-05 00:57:24.575775 | orchestrator | Thursday 05 February 2026 00:55:12 +0000 (0:00:00.384) 0:00:50.775 *****
2026-02-05 00:57:24.575780 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575785 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.575790 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.575794 | orchestrator |
2026-02-05 00:57:24.575799 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-05 00:57:24.575806 | orchestrator | Thursday 05 February 2026 00:55:12 +0000 (0:00:00.355) 0:00:51.131 *****
2026-02-05 00:57:24.575815 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575821 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575825 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575830 | orchestrator |
2026-02-05 00:57:24.575835 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-05 00:57:24.575840 | orchestrator | Thursday 05 February 2026 00:55:13 +0000 (0:00:00.619) 0:00:51.750 *****
2026-02-05 00:57:24.575845 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575850 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575855 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-05 00:57:24.575860 | orchestrator |
2026-02-05 00:57:24.575865 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-05 00:57:24.575870 | orchestrator | Thursday 05 February 2026 00:55:13 +0000 (0:00:00.347) 0:00:52.098 *****
2026-02-05 00:57:24.575875 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.575879 | orchestrator |
2026-02-05 00:57:24.575888 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-05 00:57:24.575893 | orchestrator | Thursday 05 February 2026 00:55:23 +0000 (0:00:09.797) 0:01:01.896 *****
2026-02-05 00:57:24.575898 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575903 | orchestrator |
2026-02-05 00:57:24.575908 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-05 00:57:24.575913 | orchestrator | Thursday 05 February 2026 00:55:23 +0000 (0:00:00.114) 0:01:02.010 *****
2026-02-05 00:57:24.575918 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.575923 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.575927 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.575932 | orchestrator |
2026-02-05 00:57:24.575937 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-05 00:57:24.575942 | orchestrator | Thursday 05 February 2026 00:55:24 +0000 (0:00:00.962) 0:01:02.973 *****
2026-02-05 00:57:24.575947 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.575952 | orchestrator |
2026-02-05 00:57:24.575957 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-05 00:57:24.575962 | orchestrator | Thursday 05 February 2026 00:55:31 +0000 (0:00:07.227) 0:01:10.200 *****
2026-02-05 00:57:24.575966 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575971 | orchestrator |
2026-02-05 00:57:24.575976 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-05 00:57:24.575981 | orchestrator | Thursday 05 February 2026 00:55:33 +0000 (0:00:01.655) 0:01:11.856 *****
2026-02-05 00:57:24.575986 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.575991 | orchestrator |
2026-02-05 00:57:24.575996 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-05 00:57:24.576000 | orchestrator | Thursday 05 February 2026 00:55:36 +0000 (0:00:02.779) 0:01:14.635 *****
2026-02-05 00:57:24.576005 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.576010 | orchestrator |
2026-02-05 00:57:24.576015 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-05 00:57:24.576020 | orchestrator | Thursday 05 February 2026 00:55:36 +0000 (0:00:00.189) 0:01:14.824 *****
2026-02-05 00:57:24.576025 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.576030 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.576038 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.576043 | orchestrator |
2026-02-05 00:57:24.576048 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-05 00:57:24.576053 | orchestrator | Thursday 05 February 2026 00:55:36 +0000 (0:00:00.499) 0:01:15.323 *****
2026-02-05 00:57:24.576058 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.576063 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:24.576068 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:24.576074 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-05 00:57:24.576081 | orchestrator |
2026-02-05 00:57:24.576089 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-05 00:57:24.576096 | orchestrator | skipping: no hosts matched
2026-02-05 00:57:24.576107 | orchestrator |
2026-02-05 00:57:24.576117 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-05 00:57:24.576124 | orchestrator |
2026-02-05 00:57:24.576132 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-05 00:57:24.576139 | orchestrator | Thursday 05 February 2026 00:55:37 +0000 (0:00:00.662) 0:01:15.986 *****
2026-02-05 00:57:24.576146 | orchestrator | changed: [testbed-node-1]
2026-02-05 00:57:24.576153 | orchestrator |
2026-02-05 00:57:24.576161 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-05 00:57:24.576168 | orchestrator | Thursday 05 February 2026 00:55:55 +0000 (0:00:17.724) 0:01:33.710 *****
2026-02-05 00:57:24.576175 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.576183 | orchestrator |
2026-02-05 00:57:24.576190 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-05 00:57:24.576198 | orchestrator | Thursday 05 February 2026 00:56:10 +0000 (0:00:15.613) 0:01:49.324 *****
2026-02-05 00:57:24.576210 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.576218 | orchestrator |
2026-02-05 00:57:24.576226 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-05 00:57:24.576233 | orchestrator |
2026-02-05 00:57:24.576240 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-05 00:57:24.576249 | orchestrator | Thursday 05 February 2026 00:56:13 +0000 (0:00:02.319) 0:01:51.643 *****
2026-02-05 00:57:24.576256 | orchestrator | changed: [testbed-node-2]
2026-02-05 00:57:24.576264 | orchestrator |
2026-02-05 00:57:24.576272 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-05 00:57:24.576299 | orchestrator | Thursday 05 February 2026 00:56:30 +0000 (0:00:17.586) 0:02:09.230 *****
2026-02-05 00:57:24.576307 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.576312 | orchestrator |
2026-02-05 00:57:24.576317 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-05 00:57:24.576322 | orchestrator | Thursday 05 February 2026 00:56:46 +0000 (0:00:15.622) 0:02:24.853 *****
2026-02-05 00:57:24.576327 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.576331 | orchestrator |
2026-02-05 00:57:24.576336 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-05 00:57:24.576341 | orchestrator |
2026-02-05 00:57:24.576351 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-05 00:57:24.576356 | orchestrator | Thursday 05 February 2026 00:56:48 +0000 (0:00:02.510) 0:02:27.363 *****
2026-02-05 00:57:24.576361 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.576366 | orchestrator |
2026-02-05 00:57:24.576371 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-05 00:57:24.576375 | orchestrator | Thursday 05 February 2026 00:57:00 +0000 (0:00:11.627) 0:02:38.991 *****
2026-02-05 00:57:24.576380 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.576385 | orchestrator |
2026-02-05 00:57:24.576390 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-05 00:57:24.576395 | orchestrator | Thursday 05 February 2026 00:57:05 +0000 (0:00:04.604) 0:02:43.595 *****
2026-02-05 00:57:24.576399 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.576413 | orchestrator |
2026-02-05 00:57:24.576418 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-05 00:57:24.576423 | orchestrator |
2026-02-05 00:57:24.576431 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-05 00:57:24.576436 | orchestrator | Thursday 05 February 2026 00:57:07 +0000 (0:00:02.622) 0:02:46.218 *****
2026-02-05 00:57:24.576441 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 00:57:24.576446 | orchestrator |
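The "Check MariaDB service port liveness" and "Wait for MariaDB service port liveness" tasks above are Ansible wait_for probes: they open a TCP connection to port 3306 and look for the string "MariaDB" in the server's greeting, which is why the first check fails with "...ignoring" before the cluster has been bootstrapped. A minimal standalone probe in the same spirit (a sketch, not kolla-ansible code; the function name is made up):

```python
import socket

def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 10.0,
                       search: bytes = b"MariaDB") -> bool:
    """Return True if a TCP connection to host:port yields a server
    greeting containing `search`. A MySQL/MariaDB server sends its
    handshake packet (which embeds the version string, e.g.
    '10.11.x-MariaDB') immediately after accepting the connection,
    so no client bytes need to be written first."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return search in sock.recv(1024)
    except OSError:  # refused, unreachable, or timed out
        return False
```

Before bootstrap nothing listens on 3306, so the probe is refused or times out and returns False, matching the ignored failures in the log above.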
2026-02-05 00:57:24.576451 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-05 00:57:24.576456 | orchestrator | Thursday 05 February 2026 00:57:08 +0000 (0:00:00.450) 0:02:46.669 *****
2026-02-05 00:57:24.576460 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.576465 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.576471 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.576479 | orchestrator |
2026-02-05 00:57:24.576491 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-05 00:57:24.576499 | orchestrator | Thursday 05 February 2026 00:57:10 +0000 (0:00:02.685) 0:02:49.354 *****
2026-02-05 00:57:24.576507 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.576515 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.576522 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.576530 | orchestrator |
2026-02-05 00:57:24.576537 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-05 00:57:24.576546 | orchestrator | Thursday 05 February 2026 00:57:13 +0000 (0:00:02.646) 0:02:52.000 *****
2026-02-05 00:57:24.576554 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.576563 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.576569 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.576573 | orchestrator |
2026-02-05 00:57:24.576578 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-05 00:57:24.576583 | orchestrator | Thursday 05 February 2026 00:57:15 +0000 (0:00:02.259) 0:02:54.260 *****
2026-02-05 00:57:24.576588 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.576593 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.576598 | orchestrator | changed: [testbed-node-0]
2026-02-05 00:57:24.576602 | orchestrator |
2026-02-05 00:57:24.576607 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-05 00:57:24.576612 | orchestrator | Thursday 05 February 2026 00:57:18 +0000 (0:00:02.324) 0:02:56.585 *****
2026-02-05 00:57:24.576617 | orchestrator | ok: [testbed-node-0]
2026-02-05 00:57:24.576622 | orchestrator | ok: [testbed-node-1]
2026-02-05 00:57:24.576627 | orchestrator | ok: [testbed-node-2]
2026-02-05 00:57:24.576631 | orchestrator |
2026-02-05 00:57:24.576636 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-05 00:57:24.576641 | orchestrator | Thursday 05 February 2026 00:57:21 +0000 (0:00:02.949) 0:02:59.534 *****
2026-02-05 00:57:24.576646 | orchestrator | skipping: [testbed-node-0]
2026-02-05 00:57:24.576693 | orchestrator | skipping: [testbed-node-1]
2026-02-05 00:57:24.576700 | orchestrator | skipping: [testbed-node-2]
2026-02-05 00:57:24.576705 | orchestrator |
2026-02-05 00:57:24.576710 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 00:57:24.576715 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-02-05 00:57:24.576721 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-05 00:57:24.576728 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-05 00:57:24.576733 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-05 00:57:24.576743 | orchestrator |
2026-02-05 00:57:24.576748 | orchestrator |
2026-02-05 00:57:24.576753 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 00:57:24.576758 | orchestrator | Thursday 05 February 2026 00:57:21 +0000 (0:00:00.381) 0:02:59.915 *****
2026-02-05 00:57:24.576763 | orchestrator | ===============================================================================
2026-02-05 00:57:24.576768 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.31s
2026-02-05 00:57:24.576773 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.24s
2026-02-05 00:57:24.576777 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.63s
2026-02-05 00:57:24.576782 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s
2026-02-05 00:57:24.576787 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.80s
2026-02-05 00:57:24.576792 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.23s
2026-02-05 00:57:24.576801 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.83s
2026-02-05 00:57:24.576807 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.82s
2026-02-05 00:57:24.576812 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.60s
2026-02-05 00:57:24.576816 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.06s
2026-02-05 00:57:24.576821 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.05s
2026-02-05 00:57:24.576829 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.00s
2026-02-05 00:57:24.576838 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.60s
2026-02-05 00:57:24.576845 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.45s
2026-02-05 00:57:24.576853 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.06s
2026-02-05 00:57:24.576865 |
orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2026-02-05 00:57:24.576873 | orchestrator | Check MariaDB service --------------------------------------------------- 2.82s 2026-02-05 00:57:24.576881 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.78s 2026-02-05 00:57:24.576888 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.69s 2026-02-05 00:57:24.576896 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.65s 2026-02-05 00:57:24.576904 | orchestrator | 2026-02-05 00:57:24 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:24.577029 | orchestrator | 2026-02-05 00:57:24 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:24.579186 | orchestrator | 2026-02-05 00:57:24 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:24.579276 | orchestrator | 2026-02-05 00:57:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:27.604814 | orchestrator | 2026-02-05 00:57:27 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:27.604965 | orchestrator | 2026-02-05 00:57:27 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:27.605541 | orchestrator | 2026-02-05 00:57:27 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:27.605564 | orchestrator | 2026-02-05 00:57:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:30.648893 | orchestrator | 2026-02-05 00:57:30 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:30.649893 | orchestrator | 2026-02-05 00:57:30 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:30.650741 | orchestrator | 2026-02-05 00:57:30 | INFO  | Task 
4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:30.650856 | orchestrator | 2026-02-05 00:57:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:33.708103 | orchestrator | 2026-02-05 00:57:33 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:33.708187 | orchestrator | 2026-02-05 00:57:33 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:33.709618 | orchestrator | 2026-02-05 00:57:33 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:33.709719 | orchestrator | 2026-02-05 00:57:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:36.743771 | orchestrator | 2026-02-05 00:57:36 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:36.745132 | orchestrator | 2026-02-05 00:57:36 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:36.747140 | orchestrator | 2026-02-05 00:57:36 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:36.747185 | orchestrator | 2026-02-05 00:57:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:39.787387 | orchestrator | 2026-02-05 00:57:39 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:39.788097 | orchestrator | 2026-02-05 00:57:39 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:39.789054 | orchestrator | 2026-02-05 00:57:39 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:39.789103 | orchestrator | 2026-02-05 00:57:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:42.819530 | orchestrator | 2026-02-05 00:57:42 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:42.824135 | orchestrator | 2026-02-05 00:57:42 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state 
STARTED 2026-02-05 00:57:42.827717 | orchestrator | 2026-02-05 00:57:42 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:42.827776 | orchestrator | 2026-02-05 00:57:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:45.874433 | orchestrator | 2026-02-05 00:57:45 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:45.875498 | orchestrator | 2026-02-05 00:57:45 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:45.877452 | orchestrator | 2026-02-05 00:57:45 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:45.877542 | orchestrator | 2026-02-05 00:57:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:48.906891 | orchestrator | 2026-02-05 00:57:48 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:48.908059 | orchestrator | 2026-02-05 00:57:48 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:48.909106 | orchestrator | 2026-02-05 00:57:48 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:48.909133 | orchestrator | 2026-02-05 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:51.943406 | orchestrator | 2026-02-05 00:57:51 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:51.944824 | orchestrator | 2026-02-05 00:57:51 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:51.946501 | orchestrator | 2026-02-05 00:57:51 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:51.946689 | orchestrator | 2026-02-05 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:54.995045 | orchestrator | 2026-02-05 00:57:54 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:54.995881 | orchestrator | 
2026-02-05 00:57:54 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:54.996972 | orchestrator | 2026-02-05 00:57:54 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:54.997014 | orchestrator | 2026-02-05 00:57:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:57:58.041560 | orchestrator | 2026-02-05 00:57:58 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:57:58.044271 | orchestrator | 2026-02-05 00:57:58 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:57:58.045929 | orchestrator | 2026-02-05 00:57:58 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:57:58.046010 | orchestrator | 2026-02-05 00:57:58 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:01.091928 | orchestrator | 2026-02-05 00:58:01 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:01.093770 | orchestrator | 2026-02-05 00:58:01 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:01.096496 | orchestrator | 2026-02-05 00:58:01 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:01.096571 | orchestrator | 2026-02-05 00:58:01 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:04.138718 | orchestrator | 2026-02-05 00:58:04 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:04.140496 | orchestrator | 2026-02-05 00:58:04 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:04.142537 | orchestrator | 2026-02-05 00:58:04 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:04.142615 | orchestrator | 2026-02-05 00:58:04 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:07.189121 | orchestrator | 2026-02-05 00:58:07 | INFO  | Task 
b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:07.190376 | orchestrator | 2026-02-05 00:58:07 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:07.191985 | orchestrator | 2026-02-05 00:58:07 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:07.192157 | orchestrator | 2026-02-05 00:58:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:10.240000 | orchestrator | 2026-02-05 00:58:10 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:10.242532 | orchestrator | 2026-02-05 00:58:10 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:10.244569 | orchestrator | 2026-02-05 00:58:10 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:10.244662 | orchestrator | 2026-02-05 00:58:10 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:13.287829 | orchestrator | 2026-02-05 00:58:13 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:13.290158 | orchestrator | 2026-02-05 00:58:13 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:13.291530 | orchestrator | 2026-02-05 00:58:13 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:13.291576 | orchestrator | 2026-02-05 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:16.320617 | orchestrator | 2026-02-05 00:58:16 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:16.322554 | orchestrator | 2026-02-05 00:58:16 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:16.324611 | orchestrator | 2026-02-05 00:58:16 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:16.324728 | orchestrator | 2026-02-05 00:58:16 | INFO  | Wait 1 second(s) until the next 
check 2026-02-05 00:58:19.370338 | orchestrator | 2026-02-05 00:58:19 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:19.370550 | orchestrator | 2026-02-05 00:58:19 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:19.373564 | orchestrator | 2026-02-05 00:58:19 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:19.373842 | orchestrator | 2026-02-05 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:22.422225 | orchestrator | 2026-02-05 00:58:22 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:22.423520 | orchestrator | 2026-02-05 00:58:22 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:22.425008 | orchestrator | 2026-02-05 00:58:22 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:22.425078 | orchestrator | 2026-02-05 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:25.466467 | orchestrator | 2026-02-05 00:58:25 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:25.468422 | orchestrator | 2026-02-05 00:58:25 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:25.470709 | orchestrator | 2026-02-05 00:58:25 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:25.470776 | orchestrator | 2026-02-05 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:28.507457 | orchestrator | 2026-02-05 00:58:28 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:28.508591 | orchestrator | 2026-02-05 00:58:28 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:28.509996 | orchestrator | 2026-02-05 00:58:28 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 
00:58:28.510084 | orchestrator | 2026-02-05 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:31.549308 | orchestrator | 2026-02-05 00:58:31 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:31.550264 | orchestrator | 2026-02-05 00:58:31 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:31.552314 | orchestrator | 2026-02-05 00:58:31 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:31.552378 | orchestrator | 2026-02-05 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:34.591174 | orchestrator | 2026-02-05 00:58:34 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:34.593423 | orchestrator | 2026-02-05 00:58:34 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:34.595465 | orchestrator | 2026-02-05 00:58:34 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:34.595993 | orchestrator | 2026-02-05 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:37.637836 | orchestrator | 2026-02-05 00:58:37 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:37.639963 | orchestrator | 2026-02-05 00:58:37 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:37.642239 | orchestrator | 2026-02-05 00:58:37 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:37.642353 | orchestrator | 2026-02-05 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:40.688086 | orchestrator | 2026-02-05 00:58:40 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:40.689346 | orchestrator | 2026-02-05 00:58:40 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:40.690503 | orchestrator | 2026-02-05 00:58:40 | 
INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:40.690553 | orchestrator | 2026-02-05 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:43.745347 | orchestrator | 2026-02-05 00:58:43 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:43.747927 | orchestrator | 2026-02-05 00:58:43 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:43.751761 | orchestrator | 2026-02-05 00:58:43 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:43.751834 | orchestrator | 2026-02-05 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:46.798269 | orchestrator | 2026-02-05 00:58:46 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:46.799859 | orchestrator | 2026-02-05 00:58:46 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:46.802305 | orchestrator | 2026-02-05 00:58:46 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:46.802358 | orchestrator | 2026-02-05 00:58:46 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:49.836413 | orchestrator | 2026-02-05 00:58:49 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:49.838741 | orchestrator | 2026-02-05 00:58:49 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:49.840403 | orchestrator | 2026-02-05 00:58:49 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:49.840536 | orchestrator | 2026-02-05 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:52.888363 | orchestrator | 2026-02-05 00:58:52 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:52.888524 | orchestrator | 2026-02-05 00:58:52 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in 
state STARTED 2026-02-05 00:58:52.890086 | orchestrator | 2026-02-05 00:58:52 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:52.890132 | orchestrator | 2026-02-05 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:55.934864 | orchestrator | 2026-02-05 00:58:55 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:55.937180 | orchestrator | 2026-02-05 00:58:55 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state STARTED 2026-02-05 00:58:55.938053 | orchestrator | 2026-02-05 00:58:55 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:55.938163 | orchestrator | 2026-02-05 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-02-05 00:58:58.972868 | orchestrator | 2026-02-05 00:58:58 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:58:58.974244 | orchestrator | 2026-02-05 00:58:58 | INFO  | Task 9685ee4f-e3ab-465c-beb1-6bb47b9cd2b9 is in state SUCCESS 2026-02-05 00:58:58.975189 | orchestrator | 2026-02-05 00:58:58.975230 | orchestrator | 2026-02-05 00:58:58.975238 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 00:58:58.975246 | orchestrator | 2026-02-05 00:58:58.975252 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 00:58:58.975258 | orchestrator | Thursday 05 February 2026 00:57:25 +0000 (0:00:00.228) 0:00:00.228 ***** 2026-02-05 00:58:58.975264 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.975271 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.975278 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.975283 | orchestrator | 2026-02-05 00:58:58.975291 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 00:58:58.975297 | orchestrator | Thursday 05 February 2026 00:57:26 
+0000 (0:00:00.321) 0:00:00.550 ***** 2026-02-05 00:58:58.975303 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-05 00:58:58.975310 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-05 00:58:58.975317 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-05 00:58:58.975323 | orchestrator | 2026-02-05 00:58:58.975329 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-05 00:58:58.975336 | orchestrator | 2026-02-05 00:58:58.975342 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:58:58.975348 | orchestrator | Thursday 05 February 2026 00:57:26 +0000 (0:00:00.413) 0:00:00.963 ***** 2026-02-05 00:58:58.975355 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:58:58.975362 | orchestrator | 2026-02-05 00:58:58.975366 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-05 00:58:58.975371 | orchestrator | Thursday 05 February 2026 00:57:26 +0000 (0:00:00.434) 0:00:01.397 ***** 2026-02-05 00:58:58.975392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.975424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.975433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.975441 | orchestrator | 2026-02-05 00:58:58.975445 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 
2026-02-05 00:58:58.975449 | orchestrator | Thursday 05 February 2026 00:57:28 +0000 (0:00:01.060) 0:00:02.458 ***** 2026-02-05 00:58:58.975453 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.975457 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.975464 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.975469 | orchestrator | 2026-02-05 00:58:58.975475 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:58:58.975481 | orchestrator | Thursday 05 February 2026 00:57:28 +0000 (0:00:00.353) 0:00:02.811 ***** 2026-02-05 00:58:58.975486 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 00:58:58.975496 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 00:58:58.975502 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 00:58:58.975508 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 00:58:58.975514 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 00:58:58.975521 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 00:58:58.975527 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-05 00:58:58.975532 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 00:58:58.975539 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 00:58:58.975544 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 00:58:58.975549 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 00:58:58.975555 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'masakari', 'enabled': False})  2026-02-05 00:58:58.975561 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 00:58:58.975567 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 00:58:58.975572 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-05 00:58:58.975578 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 00:58:58.975584 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-05 00:58:58.975589 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-05 00:58:58.975595 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-05 00:58:58.975602 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-05 00:58:58.975612 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-05 00:58:58.975618 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-05 00:58:58.975624 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-05 00:58:58.975630 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-05 00:58:58.975637 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-05 00:58:58.975787 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-05 00:58:58.975803 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-05 00:58:58.975810 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-05 00:58:58.975816 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-05 00:58:58.975823 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-05 00:58:58.975830 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-05 00:58:58.975836 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-05 00:58:58.975843 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-05 00:58:58.975851 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-05 00:58:58.975857 | orchestrator | 2026-02-05 00:58:58.975863 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.975870 | orchestrator | Thursday 05 February 2026 00:57:29 +0000 (0:00:00.672) 0:00:03.483 ***** 2026-02-05 00:58:58.975878 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.975886 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.975894 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.975901 | orchestrator | 2026-02-05 
00:58:58.975908 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.975915 | orchestrator | Thursday 05 February 2026 00:57:29 +0000 (0:00:00.257) 0:00:03.741 ***** 2026-02-05 00:58:58.975922 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.975931 | orchestrator | 2026-02-05 00:58:58.975948 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.975955 | orchestrator | Thursday 05 February 2026 00:57:29 +0000 (0:00:00.124) 0:00:03.865 ***** 2026-02-05 00:58:58.975962 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.975968 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.975974 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.975980 | orchestrator | 2026-02-05 00:58:58.975986 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.975992 | orchestrator | Thursday 05 February 2026 00:57:29 +0000 (0:00:00.399) 0:00:04.265 ***** 2026-02-05 00:58:58.975999 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976005 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976011 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976016 | orchestrator | 2026-02-05 00:58:58.976022 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976029 | orchestrator | Thursday 05 February 2026 00:57:30 +0000 (0:00:00.310) 0:00:04.576 ***** 2026-02-05 00:58:58.976035 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976042 | orchestrator | 2026-02-05 00:58:58.976048 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976055 | orchestrator | Thursday 05 February 2026 00:57:30 +0000 (0:00:00.141) 0:00:04.718 ***** 2026-02-05 00:58:58.976060 | orchestrator | skipping: [testbed-node-0] 
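The alternating "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" task triplets repeating through this part of the log are emitted once per enabled service by the included `/ansible/roles/horizon/tasks/policy_item.yml`. As a rough sketch of the shape such a per-service include typically has (hypothetical variable and path names, not the actual kolla-ansible source):

```yaml
# Hypothetical sketch of a per-service policy include, modelled only on
# the task names visible in the log; all identifiers are assumptions.
- name: Update policy file name
  ansible.builtin.set_fact:
    custom_policy_file: "{{ item.name }}_policy.yaml"

- name: Check if policies shall be overwritten
  ansible.builtin.stat:
    path: "{{ node_custom_config }}/{{ item.name }}/policy.yaml"
  delegate_to: localhost
  run_once: true          # why only testbed-node-0 appears for this task
  register: custom_policy

- name: Update custom policy file name
  ansible.builtin.set_fact:
    custom_policy_file: "{{ custom_policy.stat.path }}"
  when: custom_policy.stat.exists   # skipped in this run: no custom policies
```

The `run_once: true` pattern would explain why "Check if policies shall be overwritten" reports only `skipping: [testbed-node-0]` while the other tasks report all three nodes.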
2026-02-05 00:58:58.976067 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976074 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976088 | orchestrator | 2026-02-05 00:58:58.976096 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976102 | orchestrator | Thursday 05 February 2026 00:57:30 +0000 (0:00:00.311) 0:00:05.029 ***** 2026-02-05 00:58:58.976108 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976115 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976121 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976128 | orchestrator | 2026-02-05 00:58:58.976135 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976142 | orchestrator | Thursday 05 February 2026 00:57:30 +0000 (0:00:00.311) 0:00:05.340 ***** 2026-02-05 00:58:58.976148 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976155 | orchestrator | 2026-02-05 00:58:58.976161 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976168 | orchestrator | Thursday 05 February 2026 00:57:31 +0000 (0:00:00.129) 0:00:05.469 ***** 2026-02-05 00:58:58.976181 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976197 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976203 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976209 | orchestrator | 2026-02-05 00:58:58.976215 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976221 | orchestrator | Thursday 05 February 2026 00:57:31 +0000 (0:00:00.441) 0:00:05.911 ***** 2026-02-05 00:58:58.976226 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976232 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976238 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976245 | 
orchestrator | 2026-02-05 00:58:58.976252 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976258 | orchestrator | Thursday 05 February 2026 00:57:31 +0000 (0:00:00.282) 0:00:06.193 ***** 2026-02-05 00:58:58.976265 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976271 | orchestrator | 2026-02-05 00:58:58.976276 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976282 | orchestrator | Thursday 05 February 2026 00:57:31 +0000 (0:00:00.114) 0:00:06.308 ***** 2026-02-05 00:58:58.976287 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976294 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976300 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976305 | orchestrator | 2026-02-05 00:58:58.976311 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976316 | orchestrator | Thursday 05 February 2026 00:57:32 +0000 (0:00:00.276) 0:00:06.584 ***** 2026-02-05 00:58:58.976321 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976327 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976333 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976339 | orchestrator | 2026-02-05 00:58:58.976345 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976351 | orchestrator | Thursday 05 February 2026 00:57:32 +0000 (0:00:00.296) 0:00:06.881 ***** 2026-02-05 00:58:58.976357 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976363 | orchestrator | 2026-02-05 00:58:58.976369 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976375 | orchestrator | Thursday 05 February 2026 00:57:32 +0000 (0:00:00.306) 0:00:07.188 ***** 2026-02-05 00:58:58.976381 | orchestrator | 
skipping: [testbed-node-0] 2026-02-05 00:58:58.976387 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976393 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976400 | orchestrator | 2026-02-05 00:58:58.976406 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976412 | orchestrator | Thursday 05 February 2026 00:57:33 +0000 (0:00:00.278) 0:00:07.467 ***** 2026-02-05 00:58:58.976419 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976425 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976431 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976437 | orchestrator | 2026-02-05 00:58:58.976445 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976459 | orchestrator | Thursday 05 February 2026 00:57:33 +0000 (0:00:00.303) 0:00:07.770 ***** 2026-02-05 00:58:58.976466 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976472 | orchestrator | 2026-02-05 00:58:58.976478 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976484 | orchestrator | Thursday 05 February 2026 00:57:33 +0000 (0:00:00.120) 0:00:07.891 ***** 2026-02-05 00:58:58.976490 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976496 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976502 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976508 | orchestrator | 2026-02-05 00:58:58.976514 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976528 | orchestrator | Thursday 05 February 2026 00:57:33 +0000 (0:00:00.291) 0:00:08.183 ***** 2026-02-05 00:58:58.976536 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976542 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976548 | orchestrator | ok: [testbed-node-2] 2026-02-05 
00:58:58.976554 | orchestrator | 2026-02-05 00:58:58.976560 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976566 | orchestrator | Thursday 05 February 2026 00:57:34 +0000 (0:00:00.467) 0:00:08.650 ***** 2026-02-05 00:58:58.976572 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976578 | orchestrator | 2026-02-05 00:58:58.976585 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976591 | orchestrator | Thursday 05 February 2026 00:57:34 +0000 (0:00:00.109) 0:00:08.760 ***** 2026-02-05 00:58:58.976597 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976603 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976610 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976616 | orchestrator | 2026-02-05 00:58:58.976622 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976629 | orchestrator | Thursday 05 February 2026 00:57:34 +0000 (0:00:00.297) 0:00:09.058 ***** 2026-02-05 00:58:58.976635 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976670 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976677 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976683 | orchestrator | 2026-02-05 00:58:58.976689 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976695 | orchestrator | Thursday 05 February 2026 00:57:34 +0000 (0:00:00.358) 0:00:09.417 ***** 2026-02-05 00:58:58.976700 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976706 | orchestrator | 2026-02-05 00:58:58.976713 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976719 | orchestrator | Thursday 05 February 2026 00:57:35 +0000 (0:00:00.125) 0:00:09.542 ***** 2026-02-05 00:58:58.976725 | 
orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976732 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976739 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976745 | orchestrator | 2026-02-05 00:58:58.976752 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976758 | orchestrator | Thursday 05 February 2026 00:57:35 +0000 (0:00:00.303) 0:00:09.846 ***** 2026-02-05 00:58:58.976765 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976771 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976778 | orchestrator | ok: [testbed-node-2] 2026-02-05 00:58:58.976784 | orchestrator | 2026-02-05 00:58:58.976798 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976806 | orchestrator | Thursday 05 February 2026 00:57:35 +0000 (0:00:00.561) 0:00:10.408 ***** 2026-02-05 00:58:58.976813 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976819 | orchestrator | 2026-02-05 00:58:58.976826 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976832 | orchestrator | Thursday 05 February 2026 00:57:36 +0000 (0:00:00.111) 0:00:10.519 ***** 2026-02-05 00:58:58.976845 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976853 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976859 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976866 | orchestrator | 2026-02-05 00:58:58.976872 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-05 00:58:58.976878 | orchestrator | Thursday 05 February 2026 00:57:36 +0000 (0:00:00.276) 0:00:10.796 ***** 2026-02-05 00:58:58.976884 | orchestrator | ok: [testbed-node-0] 2026-02-05 00:58:58.976890 | orchestrator | ok: [testbed-node-1] 2026-02-05 00:58:58.976897 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 00:58:58.976904 | orchestrator | 2026-02-05 00:58:58.976910 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-05 00:58:58.976917 | orchestrator | Thursday 05 February 2026 00:57:36 +0000 (0:00:00.322) 0:00:11.118 ***** 2026-02-05 00:58:58.976923 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976930 | orchestrator | 2026-02-05 00:58:58.976937 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-05 00:58:58.976943 | orchestrator | Thursday 05 February 2026 00:57:36 +0000 (0:00:00.113) 0:00:11.232 ***** 2026-02-05 00:58:58.976950 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.976956 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.976962 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.976969 | orchestrator | 2026-02-05 00:58:58.976975 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-05 00:58:58.976982 | orchestrator | Thursday 05 February 2026 00:57:37 +0000 (0:00:00.466) 0:00:11.699 ***** 2026-02-05 00:58:58.976988 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:58.976994 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:58:58.977001 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:58:58.977007 | orchestrator | 2026-02-05 00:58:58.977014 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-05 00:58:58.977020 | orchestrator | Thursday 05 February 2026 00:57:38 +0000 (0:00:01.689) 0:00:13.389 ***** 2026-02-05 00:58:58.977026 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-05 00:58:58.977034 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-05 00:58:58.977040 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-05 00:58:58.977046 | orchestrator | 2026-02-05 00:58:58.977053 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-05 00:58:58.977059 | orchestrator | Thursday 05 February 2026 00:57:40 +0000 (0:00:01.865) 0:00:15.254 ***** 2026-02-05 00:58:58.977066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-05 00:58:58.977074 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-05 00:58:58.977080 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-05 00:58:58.977086 | orchestrator | 2026-02-05 00:58:58.977092 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-05 00:58:58.977105 | orchestrator | Thursday 05 February 2026 00:57:42 +0000 (0:00:02.110) 0:00:17.365 ***** 2026-02-05 00:58:58.977112 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-05 00:58:58.977119 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-05 00:58:58.977125 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-05 00:58:58.977132 | orchestrator | 2026-02-05 00:58:58.977138 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-05 00:58:58.977145 | orchestrator | Thursday 05 February 2026 00:57:44 +0000 (0:00:01.821) 0:00:19.187 ***** 2026-02-05 00:58:58.977151 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.977162 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.977169 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.977175 | 
orchestrator | 2026-02-05 00:58:58.977181 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-05 00:58:58.977188 | orchestrator | Thursday 05 February 2026 00:57:45 +0000 (0:00:00.519) 0:00:19.706 ***** 2026-02-05 00:58:58.977194 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.977201 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.977206 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.977213 | orchestrator | 2026-02-05 00:58:58.977219 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:58:58.977226 | orchestrator | Thursday 05 February 2026 00:57:45 +0000 (0:00:00.343) 0:00:20.049 ***** 2026-02-05 00:58:58.977232 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:58:58.977239 | orchestrator | 2026-02-05 00:58:58.977245 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-05 00:58:58.977252 | orchestrator | Thursday 05 February 2026 00:57:46 +0000 (0:00:00.563) 0:00:20.613 ***** 2026-02-05 00:58:58.977273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.977290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.977306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.977314 | orchestrator | 2026-02-05 00:58:58.977321 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-05 00:58:58.977327 | orchestrator | 
Thursday 05 February 2026 00:57:47 +0000 (0:00:01.635) 0:00:22.249 ***** 2026-02-05 00:58:58.977343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:58:58.977355 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.977547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:58:58.977570 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.977584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:58:58.977591 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.977597 | orchestrator | 2026-02-05 00:58:58.977603 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-05 00:58:58.977610 | orchestrator | Thursday 05 February 2026 00:57:48 +0000 (0:00:00.663) 0:00:22.912 ***** 2026-02-05 00:58:58.977623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:58:58.977635 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.977672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:58:58.977679 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.977690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-05 00:58:58.977703 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.977709 | 
orchestrator | 2026-02-05 00:58:58.977715 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-05 00:58:58.977721 | orchestrator | Thursday 05 February 2026 00:57:49 +0000 (0:00:00.830) 0:00:23.743 ***** 2026-02-05 00:58:58.977731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.977743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.977759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-05 00:58:58.977766 | orchestrator | 2026-02-05 00:58:58.977778 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:58:58.977784 | orchestrator | Thursday 05 February 2026 00:57:50 +0000 (0:00:01.463) 0:00:25.206 ***** 2026-02-05 00:58:58.977791 | orchestrator | skipping: [testbed-node-0] 2026-02-05 00:58:58.977797 | orchestrator | skipping: [testbed-node-1] 2026-02-05 00:58:58.977804 | orchestrator | skipping: [testbed-node-2] 2026-02-05 00:58:58.977810 | orchestrator | 2026-02-05 00:58:58.977815 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-05 00:58:58.977822 | orchestrator | Thursday 05 February 2026 00:57:51 +0000 (0:00:00.299) 0:00:25.506 ***** 2026-02-05 00:58:58.977829 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 00:58:58.977834 | orchestrator | 2026-02-05 00:58:58.977840 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-05 00:58:58.977850 | 
orchestrator | Thursday 05 February 2026 00:57:51 +0000 (0:00:00.510) 0:00:26.016 ***** 2026-02-05 00:58:58.977856 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:58.977862 | orchestrator | 2026-02-05 00:58:58.977868 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-05 00:58:58.977874 | orchestrator | Thursday 05 February 2026 00:57:54 +0000 (0:00:02.506) 0:00:28.523 ***** 2026-02-05 00:58:58.977881 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:58.977887 | orchestrator | 2026-02-05 00:58:58.977894 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-05 00:58:58.977901 | orchestrator | Thursday 05 February 2026 00:57:56 +0000 (0:00:02.471) 0:00:30.994 ***** 2026-02-05 00:58:58.977909 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:58.977917 | orchestrator | 2026-02-05 00:58:58.977924 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-05 00:58:58.977930 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:17.259) 0:00:48.253 ***** 2026-02-05 00:58:58.977937 | orchestrator | 2026-02-05 00:58:58.977943 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-05 00:58:58.977949 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:00.057) 0:00:48.310 ***** 2026-02-05 00:58:58.977955 | orchestrator | 2026-02-05 00:58:58.977962 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-05 00:58:58.977969 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:00.065) 0:00:48.376 ***** 2026-02-05 00:58:58.977977 | orchestrator | 2026-02-05 00:58:58.977984 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-05 00:58:58.977991 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:00.062) 
0:00:48.439 ***** 2026-02-05 00:58:58.977998 | orchestrator | changed: [testbed-node-0] 2026-02-05 00:58:58.978005 | orchestrator | changed: [testbed-node-1] 2026-02-05 00:58:58.978064 | orchestrator | changed: [testbed-node-2] 2026-02-05 00:58:58.978078 | orchestrator | 2026-02-05 00:58:58.978086 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:58:58.978094 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-05 00:58:58.978102 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-05 00:58:58.978114 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-05 00:58:58.978122 | orchestrator | 2026-02-05 00:58:58.978129 | orchestrator | 2026-02-05 00:58:58.978137 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:58:58.978144 | orchestrator | Thursday 05 February 2026 00:58:58 +0000 (0:00:44.508) 0:01:32.947 ***** 2026-02-05 00:58:58.978152 | orchestrator | =============================================================================== 2026-02-05 00:58:58.978159 | orchestrator | horizon : Restart horizon container ------------------------------------ 44.51s 2026-02-05 00:58:58.978177 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.26s 2026-02-05 00:58:58.978186 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.51s 2026-02-05 00:58:58.978193 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.47s 2026-02-05 00:58:58.978199 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.11s 2026-02-05 00:58:58.978205 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.87s 
2026-02-05 00:58:58.978212 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.82s 2026-02-05 00:58:58.978220 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.69s 2026-02-05 00:58:58.978227 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.64s 2026-02-05 00:58:58.978234 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.46s 2026-02-05 00:58:58.978241 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.06s 2026-02-05 00:58:58.978247 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2026-02-05 00:58:58.978255 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2026-02-05 00:58:58.978262 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.66s 2026-02-05 00:58:58.978269 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-02-05 00:58:58.978276 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-02-05 00:58:58.978283 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.52s 2026-02-05 00:58:58.978290 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2026-02-05 00:58:58.978297 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2026-02-05 00:58:58.978305 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2026-02-05 00:58:58.978312 | orchestrator | 2026-02-05 00:58:58 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state STARTED 2026-02-05 00:58:58.978319 | orchestrator | 2026-02-05 00:58:58 | INFO  | Wait 1 second(s) until the next check 
2026-02-05 00:59:02.023780 | orchestrator | 2026-02-05 00:59:02 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED 2026-02-05 00:59:02.025820 | orchestrator | 2026-02-05 00:59:02 | INFO  | Task 4f101267-b887-44b1-acaa-2a8cb5cc26a5 is in state SUCCESS 2026-02-05 00:59:02.027097 | orchestrator | 2026-02-05 00:59:02.027119 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 00:59:02.027125 | orchestrator | 2.16.14 2026-02-05 00:59:02.027131 | orchestrator | 2026-02-05 00:59:02.027137 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-05 00:59:02.027144 | orchestrator | 2026-02-05 00:59:02.027153 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-05 00:59:02.027161 | orchestrator | Thursday 05 February 2026 00:56:51 +0000 (0:00:00.603) 0:00:00.603 ***** 2026-02-05 00:59:02.027167 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:59:02.027174 | orchestrator | 2026-02-05 00:59:02.027180 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-05 00:59:02.027187 | orchestrator | Thursday 05 February 2026 00:56:52 +0000 (0:00:00.564) 0:00:01.167 ***** 2026-02-05 00:59:02.027193 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027200 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027206 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027213 | orchestrator | 2026-02-05 00:59:02.027219 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-05 00:59:02.027225 | orchestrator | Thursday 05 February 2026 00:56:52 +0000 (0:00:00.653) 0:00:01.821 ***** 2026-02-05 00:59:02.027245 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027290 | orchestrator | ok: [testbed-node-4] 2026-02-05 
00:59:02.027300 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027310 | orchestrator | 2026-02-05 00:59:02.027316 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-05 00:59:02.027378 | orchestrator | Thursday 05 February 2026 00:56:53 +0000 (0:00:00.296) 0:00:02.117 ***** 2026-02-05 00:59:02.027384 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027435 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027439 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027443 | orchestrator | 2026-02-05 00:59:02.027447 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-05 00:59:02.027451 | orchestrator | Thursday 05 February 2026 00:56:53 +0000 (0:00:00.838) 0:00:02.956 ***** 2026-02-05 00:59:02.027482 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027487 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027491 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027494 | orchestrator | 2026-02-05 00:59:02.027498 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-05 00:59:02.027509 | orchestrator | Thursday 05 February 2026 00:56:54 +0000 (0:00:00.301) 0:00:03.257 ***** 2026-02-05 00:59:02.027513 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027517 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027521 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027525 | orchestrator | 2026-02-05 00:59:02.027529 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-05 00:59:02.027533 | orchestrator | Thursday 05 February 2026 00:56:54 +0000 (0:00:00.363) 0:00:03.621 ***** 2026-02-05 00:59:02.027536 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027541 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027545 | orchestrator | ok: [testbed-node-5] 2026-02-05 
00:59:02.027548 | orchestrator | 2026-02-05 00:59:02.027552 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-05 00:59:02.027556 | orchestrator | Thursday 05 February 2026 00:56:54 +0000 (0:00:00.329) 0:00:03.950 ***** 2026-02-05 00:59:02.027560 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.027564 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.027568 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.027572 | orchestrator | 2026-02-05 00:59:02.027576 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-05 00:59:02.027580 | orchestrator | Thursday 05 February 2026 00:56:55 +0000 (0:00:00.526) 0:00:04.477 ***** 2026-02-05 00:59:02.027583 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.027587 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027615 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027759 | orchestrator | 2026-02-05 00:59:02.027765 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-05 00:59:02.027769 | orchestrator | Thursday 05 February 2026 00:56:55 +0000 (0:00:00.326) 0:00:04.803 ***** 2026-02-05 00:59:02.027773 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:59:02.027777 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:59:02.027781 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:59:02.027784 | orchestrator | 2026-02-05 00:59:02.027788 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-05 00:59:02.027792 | orchestrator | Thursday 05 February 2026 00:56:56 +0000 (0:00:00.723) 0:00:05.526 ***** 2026-02-05 00:59:02.027796 | orchestrator | ok: [testbed-node-3] 2026-02-05 
00:59:02.027820 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.027866 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.027870 | orchestrator | 2026-02-05 00:59:02.027874 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-05 00:59:02.027878 | orchestrator | Thursday 05 February 2026 00:56:56 +0000 (0:00:00.451) 0:00:05.977 ***** 2026-02-05 00:59:02.027984 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:59:02.027990 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:59:02.027994 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:59:02.027998 | orchestrator | 2026-02-05 00:59:02.028002 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-05 00:59:02.028006 | orchestrator | Thursday 05 February 2026 00:56:59 +0000 (0:00:02.129) 0:00:08.107 ***** 2026-02-05 00:59:02.028010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 00:59:02.028014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 00:59:02.028018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 00:59:02.028022 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028026 | orchestrator | 2026-02-05 00:59:02.028051 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-05 00:59:02.028058 | orchestrator | Thursday 05 February 2026 00:56:59 +0000 (0:00:00.579) 0:00:08.687 ***** 2026-02-05 00:59:02.028064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-05 
00:59:02.028072 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028084 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028090 | orchestrator | 2026-02-05 00:59:02.028097 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-05 00:59:02.028103 | orchestrator | Thursday 05 February 2026 00:57:00 +0000 (0:00:00.766) 0:00:09.453 ***** 2026-02-05 00:59:02.028111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028137 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028143 | orchestrator | 2026-02-05 00:59:02.028151 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-05 00:59:02.028155 | orchestrator | Thursday 05 February 2026 00:57:00 +0000 (0:00:00.322) 0:00:09.776 ***** 2026-02-05 00:59:02.028160 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bc945fbbc746', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-05 00:56:57.735670', 'end': '2026-02-05 00:56:57.774624', 'delta': '0:00:00.038954', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bc945fbbc746'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-05 00:59:02.028171 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bd046436f7ce', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-05 00:56:58.416325', 'end': '2026-02-05 00:56:58.436948', 'delta': '0:00:00.020623', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bd046436f7ce'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-05 00:59:02.028190 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e80b855388d8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-05 00:56:58.896591', 'end': '2026-02-05 00:56:58.920834', 'delta': '0:00:00.024243', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e80b855388d8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-05 00:59:02.028195 | orchestrator | 2026-02-05 00:59:02.028199 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-05 00:59:02.028203 | orchestrator | Thursday 05 February 2026 00:57:00 +0000 (0:00:00.173) 0:00:09.949 ***** 2026-02-05 00:59:02.028206 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.028210 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.028214 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.028218 | orchestrator | 2026-02-05 00:59:02.028221 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-05 00:59:02.028225 | orchestrator | Thursday 05 February 2026 00:57:01 +0000 (0:00:00.436) 0:00:10.386 ***** 2026-02-05 00:59:02.028229 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-05 00:59:02.028233 | orchestrator | 2026-02-05 00:59:02.028237 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 
1] ********************************* 2026-02-05 00:59:02.028241 | orchestrator | Thursday 05 February 2026 00:57:03 +0000 (0:00:01.838) 0:00:12.225 ***** 2026-02-05 00:59:02.028244 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028248 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028252 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028256 | orchestrator | 2026-02-05 00:59:02.028260 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-05 00:59:02.028266 | orchestrator | Thursday 05 February 2026 00:57:03 +0000 (0:00:00.291) 0:00:12.516 ***** 2026-02-05 00:59:02.028270 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028273 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028277 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028281 | orchestrator | 2026-02-05 00:59:02.028285 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 00:59:02.028291 | orchestrator | Thursday 05 February 2026 00:57:03 +0000 (0:00:00.378) 0:00:12.895 ***** 2026-02-05 00:59:02.028295 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028299 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028303 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028307 | orchestrator | 2026-02-05 00:59:02.028310 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-05 00:59:02.028314 | orchestrator | Thursday 05 February 2026 00:57:04 +0000 (0:00:00.439) 0:00:13.335 ***** 2026-02-05 00:59:02.028318 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.028322 | orchestrator | 2026-02-05 00:59:02.028325 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-05 00:59:02.028329 | orchestrator | Thursday 05 February 2026 00:57:04 +0000 (0:00:00.138) 0:00:13.473 
***** 2026-02-05 00:59:02.028333 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028336 | orchestrator | 2026-02-05 00:59:02.028340 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-05 00:59:02.028344 | orchestrator | Thursday 05 February 2026 00:57:04 +0000 (0:00:00.209) 0:00:13.683 ***** 2026-02-05 00:59:02.028348 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028351 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028355 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028359 | orchestrator | 2026-02-05 00:59:02.028363 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-05 00:59:02.028366 | orchestrator | Thursday 05 February 2026 00:57:04 +0000 (0:00:00.273) 0:00:13.956 ***** 2026-02-05 00:59:02.028370 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028374 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028378 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028381 | orchestrator | 2026-02-05 00:59:02.028385 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-05 00:59:02.028389 | orchestrator | Thursday 05 February 2026 00:57:05 +0000 (0:00:00.307) 0:00:14.263 ***** 2026-02-05 00:59:02.028393 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028396 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028400 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028404 | orchestrator | 2026-02-05 00:59:02.028408 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-05 00:59:02.028411 | orchestrator | Thursday 05 February 2026 00:57:05 +0000 (0:00:00.486) 0:00:14.750 ***** 2026-02-05 00:59:02.028415 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028419 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 00:59:02.028423 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028428 | orchestrator | 2026-02-05 00:59:02.028434 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-05 00:59:02.028440 | orchestrator | Thursday 05 February 2026 00:57:06 +0000 (0:00:00.320) 0:00:15.071 ***** 2026-02-05 00:59:02.028447 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028453 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028459 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028465 | orchestrator | 2026-02-05 00:59:02.028483 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-05 00:59:02.028490 | orchestrator | Thursday 05 February 2026 00:57:06 +0000 (0:00:00.317) 0:00:15.388 ***** 2026-02-05 00:59:02.028496 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028502 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028508 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028533 | orchestrator | 2026-02-05 00:59:02.028542 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-05 00:59:02.028546 | orchestrator | Thursday 05 February 2026 00:57:06 +0000 (0:00:00.303) 0:00:15.692 ***** 2026-02-05 00:59:02.028550 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028553 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028559 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028571 | orchestrator | 2026-02-05 00:59:02.028577 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-05 00:59:02.028583 | orchestrator | Thursday 05 February 2026 00:57:07 +0000 (0:00:00.485) 0:00:16.178 ***** 2026-02-05 00:59:02.028590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f', 'dm-uuid-LVM-W6nJ7ENqG04Qc7VCQLGpY2qnV5YhUZsM9A2LJ1qCPfepxWi2YXgpPfnxICTyGXCK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd', 'dm-uuid-LVM-m0K1q4L1OkOvOG4NeS8BTL15y4z5NEn9UFn3b4FqGIYzR4nbwul6S35G1g1RcetS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028679 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028693 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028700 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IjRVxd-4RY4-7Ai2-bA1z-fs6i-PQm0-O7Xwvo', 'scsi-0QEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b', 'scsi-SQEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c-osd--block--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c', 'dm-uuid-LVM-l5wfutUVY3Mjb8LJUgAGN63VEFe8QeDcgf1NL2jk6HPybKIKRq4gPQh2wIOxCEWz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HldCCt-GzaL-8wFL-FznN-K21O-j0j1-Ru1MgY', 'scsi-0QEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3', 'scsi-SQEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a29ad6cb--22eb--5988--a460--3c83981a9937-osd--block--a29ad6cb--22eb--5988--a460--3c83981a9937', 'dm-uuid-LVM-epSIr36ljuUUSxb0VExFke7F2vw1BxjalkkiqCKp0dPNXTyo0YKF4XVaW2IuH5iy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726', 'scsi-SQEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-05 00:59:02.028772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028777 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.028781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028785 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44714651--8fa8--5efe--842f--d8a32b49e267-osd--block--44714651--8fa8--5efe--842f--d8a32b49e267', 'dm-uuid-LVM-dnumPSiW5Qzo4z08hu51ndJXedhPfJ0xnavZvT8fOc4BESdC6y5GDXreFD41aFjQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c-osd--block--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XprKlX-LPsf-tmNA-oMKm-4JA4-WUIg-hPQ0uF', 'scsi-0QEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85', 'scsi-SQEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028827 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a29ad6cb--22eb--5988--a460--3c83981a9937-osd--block--a29ad6cb--22eb--5988--a460--3c83981a9937'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bg2j1X-hOvO-h8sZ-lUHD-353c-2KrO-hqtt9F', 'scsi-0QEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a', 'scsi-SQEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685-osd--block--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685', 'dm-uuid-LVM-uidyyMD0HIQLmBR883qUZvI9z5lRQwQARuTCVNfpjHkjGfbH83dR6eQ4ZGxNkWE7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2026-02-05 00:59:02.028836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c', 'scsi-SQEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028858 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.028862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028877 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-05 00:59:02.028896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028903 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--44714651--8fa8--5efe--842f--d8a32b49e267-osd--block--44714651--8fa8--5efe--842f--d8a32b49e267'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UY5He4-ZO5Z-2Q7f-bsPy-bRbE-i0JZ-CnlGio', 'scsi-0QEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f', 'scsi-SQEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685-osd--block--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m1fmTw-heZ7-Ss0N-4Ikk-0ZW8-w1Ji-pHvzZ4', 'scsi-0QEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd', 'scsi-SQEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7', 'scsi-SQEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-05 00:59:02.028925 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.028929 | orchestrator | 2026-02-05 00:59:02.028933 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-05 00:59:02.028937 | orchestrator | Thursday 05 February 2026 00:57:07 +0000 (0:00:00.516) 0:00:16.694 ***** 2026-02-05 00:59:02.028941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f', 'dm-uuid-LVM-W6nJ7ENqG04Qc7VCQLGpY2qnV5YhUZsM9A2LJ1qCPfepxWi2YXgpPfnxICTyGXCK'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd', 'dm-uuid-LVM-m0K1q4L1OkOvOG4NeS8BTL15y4z5NEn9UFn3b4FqGIYzR4nbwul6S35G1g1RcetS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028956 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028974 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c-osd--block--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c', 'dm-uuid-LVM-l5wfutUVY3Mjb8LJUgAGN63VEFe8QeDcgf1NL2jk6HPybKIKRq4gPQh2wIOxCEWz'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.028994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a29ad6cb--22eb--5988--a460--3c83981a9937-osd--block--a29ad6cb--22eb--5988--a460--3c83981a9937', 'dm-uuid-LVM-epSIr36ljuUUSxb0VExFke7F2vw1BxjalkkiqCKp0dPNXTyo0YKF4XVaW2IuH5iy'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-02-05 00:59:02.029007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part1', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part14', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part15', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part16', 'scsi-SQEMU_QEMU_HARDDISK_7e282df8-56f1-48b7-aab2-50ed79008a58-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029020 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f-osd--block--9bc271eb--ec29--52a2--8b95--ff4dfb27e19f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IjRVxd-4RY4-7Ai2-bA1z-fs6i-PQm0-O7Xwvo', 'scsi-0QEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b', 'scsi-SQEMU_QEMU_HARDDISK_b7bd6d63-837c-4716-bacc-a146e68be59b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029025 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029030 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1b54f13f--3e23--5303--9525--7c2d84d571dd-osd--block--1b54f13f--3e23--5303--9525--7c2d84d571dd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-HldCCt-GzaL-8wFL-FznN-K21O-j0j1-Ru1MgY', 'scsi-0QEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3', 'scsi-SQEMU_QEMU_HARDDISK_ea1b8944-91e5-47d3-baee-befb07fac7f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726', 'scsi-SQEMU_QEMU_HARDDISK_65c05e60-3149-4d51-82d7-128e0fd85726'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029054 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029058 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029062 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029078 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029086 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part1', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part14', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part15', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part16', 'scsi-SQEMU_QEMU_HARDDISK_df0333c4-e8b6-4f70-a3d8-8b7b4108b7f6-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c-osd--block--50aca8a8--e8e5--56ca--ab64--02beaf30ee0c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-XprKlX-LPsf-tmNA-oMKm-4JA4-WUIg-hPQ0uF', 'scsi-0QEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85', 'scsi-SQEMU_QEMU_HARDDISK_df9dffbb-fa4a-4614-acfc-458aacc61e85'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44714651--8fa8--5efe--842f--d8a32b49e267-osd--block--44714651--8fa8--5efe--842f--d8a32b49e267', 'dm-uuid-LVM-dnumPSiW5Qzo4z08hu51ndJXedhPfJ0xnavZvT8fOc4BESdC6y5GDXreFD41aFjQ'], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a29ad6cb--22eb--5988--a460--3c83981a9937-osd--block--a29ad6cb--22eb--5988--a460--3c83981a9937'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bg2j1X-hOvO-h8sZ-lUHD-353c-2KrO-hqtt9F', 'scsi-0QEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a', 'scsi-SQEMU_QEMU_HARDDISK_36110d5e-3998-4d39-b163-f137840d584a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685-osd--block--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685', 'dm-uuid-LVM-uidyyMD0HIQLmBR883qUZvI9z5lRQwQARuTCVNfpjHkjGfbH83dR6eQ4ZGxNkWE7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029128 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c', 'scsi-SQEMU_QEMU_HARDDISK_ba105820-b7fd-4d06-b751-3e65d5700a2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029134 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029144 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029150 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029182 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029222 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part1', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part14', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part15', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part16', 'scsi-SQEMU_QEMU_HARDDISK_c136665a-9242-437c-80b5-efc7e2d18f11-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-05 00:59:02.029231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--44714651--8fa8--5efe--842f--d8a32b49e267-osd--block--44714651--8fa8--5efe--842f--d8a32b49e267'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UY5He4-ZO5Z-2Q7f-bsPy-bRbE-i0JZ-CnlGio', 'scsi-0QEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f', 'scsi-SQEMU_QEMU_HARDDISK_5c16bdfb-9776-4282-a52f-d0746538d24f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029242 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685-osd--block--56069e6e--1b0b--5c3d--aabe--9f5e4e37a685'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-m1fmTw-heZ7-Ss0N-4Ikk-0ZW8-w1Ji-pHvzZ4', 'scsi-0QEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd', 'scsi-SQEMU_QEMU_HARDDISK_66450c46-76da-4fbd-b0f3-00f2a07ceccd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029248 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7', 'scsi-SQEMU_QEMU_HARDDISK_13446d2e-9611-4725-bf6d-ec20aba1d1c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-05-00-03-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-05 00:59:02.029266 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029272 | orchestrator | 2026-02-05 00:59:02.029278 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-05 00:59:02.029285 | orchestrator | Thursday 05 February 2026 00:57:08 +0000 (0:00:00.550) 0:00:17.244 ***** 2026-02-05 00:59:02.029292 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.029296 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.029300 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.029304 | orchestrator | 2026-02-05 00:59:02.029307 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-05 00:59:02.029311 | orchestrator | Thursday 05 February 2026 00:57:08 +0000 (0:00:00.682) 0:00:17.927 ***** 2026-02-05 00:59:02.029315 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.029319 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.029322 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.029326 | orchestrator | 2026-02-05 00:59:02.029330 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 00:59:02.029337 | orchestrator | Thursday 05 February 2026 00:57:09 +0000 (0:00:00.457) 0:00:18.384 ***** 2026-02-05 00:59:02.029341 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.029344 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.029348 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.029352 | orchestrator | 2026-02-05 00:59:02.029356 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 00:59:02.029360 | orchestrator | Thursday 05 February 2026 00:57:09 +0000 (0:00:00.643) 
0:00:19.028 ***** 2026-02-05 00:59:02.029363 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029367 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029371 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029375 | orchestrator | 2026-02-05 00:59:02.029379 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-05 00:59:02.029385 | orchestrator | Thursday 05 February 2026 00:57:10 +0000 (0:00:00.242) 0:00:19.270 ***** 2026-02-05 00:59:02.029388 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029392 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029396 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029400 | orchestrator | 2026-02-05 00:59:02.029403 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-05 00:59:02.029407 | orchestrator | Thursday 05 February 2026 00:57:10 +0000 (0:00:00.359) 0:00:19.629 ***** 2026-02-05 00:59:02.029411 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029415 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029418 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029422 | orchestrator | 2026-02-05 00:59:02.029426 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-05 00:59:02.029430 | orchestrator | Thursday 05 February 2026 00:57:10 +0000 (0:00:00.399) 0:00:20.029 ***** 2026-02-05 00:59:02.029434 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-05 00:59:02.029438 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-05 00:59:02.029441 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-05 00:59:02.029445 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-05 00:59:02.029449 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-05 00:59:02.029452 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-05 00:59:02.029456 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-05 00:59:02.029460 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-05 00:59:02.029464 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-05 00:59:02.029468 | orchestrator | 2026-02-05 00:59:02.029471 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-05 00:59:02.029475 | orchestrator | Thursday 05 February 2026 00:57:11 +0000 (0:00:00.812) 0:00:20.841 ***** 2026-02-05 00:59:02.029479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-05 00:59:02.029483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-05 00:59:02.029487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-05 00:59:02.029491 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029494 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-05 00:59:02.029498 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-05 00:59:02.029502 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-05 00:59:02.029506 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029510 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-05 00:59:02.029513 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-05 00:59:02.029517 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-05 00:59:02.029521 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029527 | orchestrator | 2026-02-05 00:59:02.029534 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-05 00:59:02.029548 | orchestrator | Thursday 05 February 2026 00:57:12 +0000 (0:00:00.327) 0:00:21.169 ***** 2026-02-05 
00:59:02.029554 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 00:59:02.029561 | orchestrator | 2026-02-05 00:59:02.029567 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-05 00:59:02.029573 | orchestrator | Thursday 05 February 2026 00:57:12 +0000 (0:00:00.621) 0:00:21.790 ***** 2026-02-05 00:59:02.029583 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029589 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029596 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029602 | orchestrator | 2026-02-05 00:59:02.029608 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-05 00:59:02.029615 | orchestrator | Thursday 05 February 2026 00:57:13 +0000 (0:00:00.297) 0:00:22.088 ***** 2026-02-05 00:59:02.029621 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029628 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029632 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029636 | orchestrator | 2026-02-05 00:59:02.029662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-05 00:59:02.029668 | orchestrator | Thursday 05 February 2026 00:57:13 +0000 (0:00:00.298) 0:00:22.387 ***** 2026-02-05 00:59:02.029674 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029680 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029686 | orchestrator | skipping: [testbed-node-5] 2026-02-05 00:59:02.029692 | orchestrator | 2026-02-05 00:59:02.029698 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-05 00:59:02.029704 | orchestrator | Thursday 05 February 2026 00:57:13 +0000 (0:00:00.307) 0:00:22.694 ***** 2026-02-05 
00:59:02.029711 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.029717 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.029724 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.029731 | orchestrator | 2026-02-05 00:59:02.029735 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-05 00:59:02.029739 | orchestrator | Thursday 05 February 2026 00:57:14 +0000 (0:00:00.610) 0:00:23.304 ***** 2026-02-05 00:59:02.029742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:59:02.029746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:59:02.029750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:59:02.029754 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029758 | orchestrator | 2026-02-05 00:59:02.029761 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-05 00:59:02.029765 | orchestrator | Thursday 05 February 2026 00:57:14 +0000 (0:00:00.367) 0:00:23.671 ***** 2026-02-05 00:59:02.029769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:59:02.029773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:59:02.029780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:59:02.029784 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029787 | orchestrator | 2026-02-05 00:59:02.029791 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-05 00:59:02.029795 | orchestrator | Thursday 05 February 2026 00:57:14 +0000 (0:00:00.360) 0:00:24.032 ***** 2026-02-05 00:59:02.029799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-05 00:59:02.029803 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-05 00:59:02.029806 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-05 00:59:02.029810 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029814 | orchestrator | 2026-02-05 00:59:02.029818 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-05 00:59:02.029827 | orchestrator | Thursday 05 February 2026 00:57:15 +0000 (0:00:00.379) 0:00:24.411 ***** 2026-02-05 00:59:02.029831 | orchestrator | ok: [testbed-node-3] 2026-02-05 00:59:02.029835 | orchestrator | ok: [testbed-node-4] 2026-02-05 00:59:02.029838 | orchestrator | ok: [testbed-node-5] 2026-02-05 00:59:02.029842 | orchestrator | 2026-02-05 00:59:02.029846 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-05 00:59:02.029850 | orchestrator | Thursday 05 February 2026 00:57:15 +0000 (0:00:00.344) 0:00:24.756 ***** 2026-02-05 00:59:02.029854 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-05 00:59:02.029858 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-05 00:59:02.029861 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-05 00:59:02.029865 | orchestrator | 2026-02-05 00:59:02.029869 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-05 00:59:02.029873 | orchestrator | Thursday 05 February 2026 00:57:16 +0000 (0:00:00.540) 0:00:25.297 ***** 2026-02-05 00:59:02.029876 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:59:02.029880 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:59:02.029884 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:59:02.029888 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 00:59:02.029892 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-05 00:59:02.029896 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 00:59:02.029899 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 00:59:02.029903 | orchestrator | 2026-02-05 00:59:02.029907 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-05 00:59:02.029911 | orchestrator | Thursday 05 February 2026 00:57:17 +0000 (0:00:00.922) 0:00:26.220 ***** 2026-02-05 00:59:02.029915 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-05 00:59:02.029918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-05 00:59:02.029922 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-05 00:59:02.029926 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-05 00:59:02.029930 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-05 00:59:02.029934 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-05 00:59:02.029941 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-05 00:59:02.029945 | orchestrator | 2026-02-05 00:59:02.029948 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-05 00:59:02.029952 | orchestrator | Thursday 05 February 2026 00:57:19 +0000 (0:00:01.897) 0:00:28.117 ***** 2026-02-05 00:59:02.029956 | orchestrator | skipping: [testbed-node-3] 2026-02-05 00:59:02.029960 | orchestrator | skipping: [testbed-node-4] 2026-02-05 00:59:02.029964 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-05 00:59:02.029967 | orchestrator | 2026-02-05 00:59:02.029971 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-05 00:59:02.029975 | orchestrator | Thursday 05 February 2026 00:57:19 +0000 (0:00:00.447) 0:00:28.565 ***** 2026-02-05 00:59:02.029979 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:59:02.029983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:59:02.029990 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:59:02.029996 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:59:02.030000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-05 00:59:02.030003 | orchestrator | 2026-02-05 00:59:02.030007 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-05 00:59:02.030049 | orchestrator | Thursday 05 February 2026 00:58:06 +0000 (0:00:46.556) 0:01:15.121 ***** 2026-02-05 00:59:02.030055 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030059 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030063 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030067 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030070 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030074 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030078 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-05 00:59:02.030082 | orchestrator | 2026-02-05 00:59:02.030086 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-05 00:59:02.030090 | orchestrator | Thursday 05 February 2026 00:58:31 +0000 (0:00:25.594) 0:01:40.716 ***** 2026-02-05 00:59:02.030093 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030097 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030101 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030105 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030109 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030112 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030116 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-05 00:59:02.030120 | orchestrator | 2026-02-05 00:59:02.030124 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-05 00:59:02.030127 | orchestrator | Thursday 05 February 2026 00:58:43 +0000 (0:00:11.982) 0:01:52.699 ***** 2026-02-05 00:59:02.030131 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030135 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:59:02.030139 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:59:02.030143 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030147 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:59:02.030154 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:59:02.030161 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030165 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:59:02.030169 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:59:02.030173 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030176 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:59:02.030180 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:59:02.030184 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030188 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-05 00:59:02.030192 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:59:02.030206 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-05 00:59:02.030210 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-05 00:59:02.030214 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-05 00:59:02.030218 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-05 00:59:02.030222 | orchestrator | 2026-02-05 00:59:02.030226 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 00:59:02.030229 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-05 00:59:02.030234 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-05 00:59:02.030241 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-05 00:59:02.030245 | orchestrator | 2026-02-05 00:59:02.030249 | orchestrator | 2026-02-05 00:59:02.030253 | orchestrator | 2026-02-05 00:59:02.030257 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 00:59:02.030260 | orchestrator | Thursday 05 February 2026 00:59:01 +0000 (0:00:18.093) 0:02:10.792 ***** 2026-02-05 00:59:02.030264 | orchestrator | =============================================================================== 2026-02-05 00:59:02.030268 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.56s 2026-02-05 00:59:02.030272 | orchestrator | generate keys ---------------------------------------------------------- 25.59s 2026-02-05 00:59:02.030276 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.09s 
2026-02-05 00:59:02.030280 | orchestrator | get keys from monitors ------------------------------------------------- 11.98s 2026-02-05 00:59:02.030283 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.13s 2026-02-05 00:59:02.030287 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.90s 2026-02-05 00:59:02.030291 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.84s 2026-02-05 00:59:02.030295 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2026-02-05 00:59:02.030299 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-02-05 00:59:02.030303 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.81s 2026-02-05 00:59:02.030306 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s 2026-02-05 00:59:02.030310 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2026-02-05 00:59:02.030314 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s 2026-02-05 00:59:02.030318 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s 2026-02-05 00:59:02.030324 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-02-05 00:59:02.030328 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.62s 2026-02-05 00:59:02.030332 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.61s 2026-02-05 00:59:02.030335 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.58s 2026-02-05 00:59:02.030339 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.56s 2026-02-05 
00:59:02.030343 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.55s
2026-02-05 00:59:02.030347 | orchestrator | 2026-02-05 00:59:02 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:05.075756 | orchestrator | 2026-02-05 00:59:05 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:05.078307 | orchestrator | 2026-02-05 00:59:05 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:05.078371 | orchestrator | 2026-02-05 00:59:05 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:08.119996 | orchestrator | 2026-02-05 00:59:08 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:08.121706 | orchestrator | 2026-02-05 00:59:08 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:08.121765 | orchestrator | 2026-02-05 00:59:08 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:11.166472 | orchestrator | 2026-02-05 00:59:11 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:11.167451 | orchestrator | 2026-02-05 00:59:11 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:11.167484 | orchestrator | 2026-02-05 00:59:11 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:14.214165 | orchestrator | 2026-02-05 00:59:14 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:14.216123 | orchestrator | 2026-02-05 00:59:14 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:14.216199 | orchestrator | 2026-02-05 00:59:14 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:17.264407 | orchestrator | 2026-02-05 00:59:17 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:17.266254 | orchestrator | 2026-02-05 00:59:17 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:17.266318 | orchestrator | 2026-02-05 00:59:17 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:20.318576 | orchestrator | 2026-02-05 00:59:20 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:20.320547 | orchestrator | 2026-02-05 00:59:20 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:20.320619 | orchestrator | 2026-02-05 00:59:20 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:23.370092 | orchestrator | 2026-02-05 00:59:23 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:23.371400 | orchestrator | 2026-02-05 00:59:23 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:23.371447 | orchestrator | 2026-02-05 00:59:23 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:26.432112 | orchestrator | 2026-02-05 00:59:26 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:26.436233 | orchestrator | 2026-02-05 00:59:26 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:26.436301 | orchestrator | 2026-02-05 00:59:26 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:29.480400 | orchestrator | 2026-02-05 00:59:29 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:29.482485 | orchestrator | 2026-02-05 00:59:29 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:29.482878 | orchestrator | 2026-02-05 00:59:29 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:32.531048 | orchestrator | 2026-02-05 00:59:32 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:32.533512 | orchestrator | 2026-02-05 00:59:32 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:32.533719 | orchestrator | 2026-02-05 00:59:32 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:35.583051 | orchestrator | 2026-02-05 00:59:35 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:35.585209 | orchestrator | 2026-02-05 00:59:35 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:35.585290 | orchestrator | 2026-02-05 00:59:35 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:38.622116 | orchestrator | 2026-02-05 00:59:38 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:38.622436 | orchestrator | 2026-02-05 00:59:38 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state STARTED
2026-02-05 00:59:38.622454 | orchestrator | 2026-02-05 00:59:38 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:41.674446 | orchestrator | 2026-02-05 00:59:41 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:41.675551 | orchestrator | 2026-02-05 00:59:41 | INFO  | Task b1093909-c8ba-49a4-b420-97320d898c39 is in state SUCCESS
2026-02-05 00:59:41.675690 | orchestrator | 2026-02-05 00:59:41 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:44.726458 | orchestrator | 2026-02-05 00:59:44 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 00:59:44.728034 | orchestrator | 2026-02-05 00:59:44 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:44.728189 | orchestrator | 2026-02-05 00:59:44 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:47.774991 | orchestrator | 2026-02-05 00:59:47 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 00:59:47.776078 | orchestrator | 2026-02-05 00:59:47 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:47.776139 | orchestrator | 2026-02-05 00:59:47 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:50.815124 | orchestrator | 2026-02-05 00:59:50 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 00:59:50.816773 | orchestrator | 2026-02-05 00:59:50 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:50.816826 | orchestrator | 2026-02-05 00:59:50 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:53.855335 | orchestrator | 2026-02-05 00:59:53 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 00:59:53.857040 | orchestrator | 2026-02-05 00:59:53 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:53.857098 | orchestrator | 2026-02-05 00:59:53 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:56.898325 | orchestrator | 2026-02-05 00:59:56 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 00:59:56.899755 | orchestrator | 2026-02-05 00:59:56 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:56.899842 | orchestrator | 2026-02-05 00:59:56 | INFO  | Wait 1 second(s) until the next check
2026-02-05 00:59:59.943025 | orchestrator | 2026-02-05 00:59:59 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 00:59:59.945960 | orchestrator | 2026-02-05 00:59:59 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state STARTED
2026-02-05 00:59:59.946063 | orchestrator | 2026-02-05 00:59:59 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:00:02.984475 | orchestrator | 2026-02-05 01:00:02 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 01:00:02.986197 | orchestrator | 2026-02-05 01:00:02 | INFO  | Task b6f0e9cb-49d1-4dcc-91f2-4e5547a2a21c is in state SUCCESS
2026-02-05 01:00:02.986339 | orchestrator |
2026-02-05 01:00:02.986346 | orchestrator |
2026-02-05 01:00:02.986350 | orchestrator | PLAY [Copy ceph keys to the
configuration repository] **************************
2026-02-05 01:00:02.986355 | orchestrator |
2026-02-05 01:00:02.986359 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-05 01:00:02.986364 | orchestrator | Thursday 05 February 2026 00:59:06 +0000 (0:00:00.172) 0:00:00.172 *****
2026-02-05 01:00:02.986369 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-05 01:00:02.986375 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986378 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986382 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-05 01:00:02.986386 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986390 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-05 01:00:02.986394 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-05 01:00:02.986398 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-05 01:00:02.986402 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-05 01:00:02.986405 | orchestrator |
2026-02-05 01:00:02.986409 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-05 01:00:02.986413 | orchestrator | Thursday 05 February 2026 00:59:10 +0000 (0:00:04.780) 0:00:04.953 *****
2026-02-05 01:00:02.986417 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-05 01:00:02.986421 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986425 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986429 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-05 01:00:02.986432 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986436 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-05 01:00:02.986440 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-05 01:00:02.986444 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-05 01:00:02.986448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-05 01:00:02.986452 | orchestrator |
2026-02-05 01:00:02.986455 | orchestrator | TASK [Create share directory] **************************************************
2026-02-05 01:00:02.986479 | orchestrator | Thursday 05 February 2026 00:59:15 +0000 (0:00:04.619) 0:00:09.572 *****
2026-02-05 01:00:02.986484 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-05 01:00:02.986491 | orchestrator |
2026-02-05 01:00:02.986497 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-05 01:00:02.986502 | orchestrator | Thursday 05 February 2026 00:59:16 +0000 (0:00:01.028) 0:00:10.601 *****
2026-02-05 01:00:02.986509 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-05 01:00:02.986515 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986522 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986528 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-05 01:00:02.986582 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986589 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-05 01:00:02.986593 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-05 01:00:02.986597 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-05 01:00:02.986601 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-05 01:00:02.986605 | orchestrator |
2026-02-05 01:00:02.986609 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-05 01:00:02.986616 | orchestrator | Thursday 05 February 2026 00:59:30 +0000 (0:00:13.983) 0:00:24.585 *****
2026-02-05 01:00:02.986867 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-05 01:00:02.986886 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-05 01:00:02.986893 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-05 01:00:02.986899 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-05 01:00:02.986927 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-05 01:00:02.986935 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-05 01:00:02.986939 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-05 01:00:02.986943 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-05 01:00:02.986947 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-05 01:00:02.986953 | orchestrator |
2026-02-05 01:00:02.986959 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-05 01:00:02.986966 | orchestrator | Thursday 05 February 2026 00:59:33 +0000 (0:00:03.115) 0:00:27.700 *****
2026-02-05 01:00:02.986974 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-05 01:00:02.986980 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986986 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.986992 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-05 01:00:02.986998 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-05 01:00:02.987004 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-05 01:00:02.987011 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-05 01:00:02.987017 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-05 01:00:02.987023 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-05 01:00:02.987041 | orchestrator |
2026-02-05 01:00:02.987048 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:00:02.987053 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:00:02.987059 | orchestrator |
2026-02-05 01:00:02.987063 | orchestrator |
2026-02-05 01:00:02.987067 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:00:02.987070 | orchestrator | Thursday 05 February 2026 00:59:40 +0000 (0:00:06.892) 0:00:34.593 *****
2026-02-05 01:00:02.987074 | orchestrator | ===============================================================================
2026-02-05 01:00:02.987078 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.98s
2026-02-05 01:00:02.987082 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.89s
2026-02-05 01:00:02.987086 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.78s
2026-02-05 01:00:02.987089 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.62s
2026-02-05 01:00:02.987093 | orchestrator | Check if target directories exist --------------------------------------- 3.12s
2026-02-05 01:00:02.987097 | orchestrator | Create share directory -------------------------------------------------- 1.03s
2026-02-05 01:00:02.987101 | orchestrator |
2026-02-05 01:00:02.987385 | orchestrator |
2026-02-05 01:00:02.987399 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:00:02.987403 | orchestrator |
2026-02-05 01:00:02.987495 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:00:02.987501 | orchestrator | Thursday 05 February 2026 00:57:25 +0000 (0:00:00.237) 0:00:00.237 *****
2026-02-05 01:00:02.987505 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:00:02.987509 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:00:02.987513 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:00:02.987519 | orchestrator |
2026-02-05 01:00:02.987525 | orchestrator | TASK [Group hosts based on enabled services]
***********************************
2026-02-05 01:00:02.987532 | orchestrator | Thursday 05 February 2026 00:57:26 +0000 (0:00:00.277) 0:00:00.515 *****
2026-02-05 01:00:02.987541 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-05 01:00:02.987549 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-05 01:00:02.987555 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-05 01:00:02.987561 | orchestrator |
2026-02-05 01:00:02.987567 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-05 01:00:02.987574 | orchestrator |
2026-02-05 01:00:02.987580 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 01:00:02.987586 | orchestrator | Thursday 05 February 2026 00:57:26 +0000 (0:00:00.373) 0:00:00.888 *****
2026-02-05 01:00:02.987592 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:00:02.987600 | orchestrator |
2026-02-05 01:00:02.987604 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-05 01:00:02.987608 | orchestrator | Thursday 05 February 2026 00:57:26 +0000 (0:00:00.488) 0:00:01.377 *****
2026-02-05 01:00:02.987682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.987703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.987717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.987724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.987947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.987967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.987983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 01:00:02.987992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 01:00:02.987999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 01:00:02.988034 | orchestrator |
2026-02-05 01:00:02.988041 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-05 01:00:02.988045 | orchestrator | Thursday 05 February 2026 00:57:28 +0000 (0:00:01.691) 0:00:03.069 *****
2026-02-05 01:00:02.988049 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:00:02.988054 | orchestrator |
2026-02-05 01:00:02.988071 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-05 01:00:02.988076 | orchestrator | Thursday 05 February 2026 00:57:28 +0000 (0:00:00.131) 0:00:03.200 *****
2026-02-05 01:00:02.988080 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:00:02.988083 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:00:02.988087 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:00:02.988091 | orchestrator |
2026-02-05 01:00:02.988095 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-05 01:00:02.988099 | orchestrator | Thursday 05 February 2026 00:57:29 +0000 (0:00:00.366) 0:00:03.567 *****
2026-02-05 01:00:02.988103 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:00:02.988106 | orchestrator |
2026-02-05 01:00:02.988110 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 01:00:02.988114 | orchestrator | Thursday 05 February 2026 00:57:29 +0000 (0:00:00.851) 0:00:04.418 *****
2026-02-05 01:00:02.988118 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:00:02.988122 | orchestrator |
2026-02-05 01:00:02.988125 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-05 01:00:02.988129 | orchestrator | Thursday 05 February 2026 00:57:30 +0000 (0:00:00.528) 0:00:04.947 *****
2026-02-05 01:00:02.988137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.988150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.988154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.988164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.988168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.988179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.988183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 01:00:02.988187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 01:00:02.988199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-05 01:00:02.988204 | orchestrator |
2026-02-05 01:00:02.988208 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-05 01:00:02.988211 | orchestrator | Thursday 05 February 2026 00:57:33 +0000 (0:00:03.316) 0:00:08.263 *****
2026-02-05 01:00:02.988220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-05 01:00:02.988225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-05 01:00:02.988239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout':
'30'}}})  2026-02-05 01:00:02.988244 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.988248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988260 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988288 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.988291 | orchestrator | 2026-02-05 01:00:02.988295 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-05 01:00:02.988299 | orchestrator | Thursday 05 February 2026 00:57:34 +0000 (0:00:00.746) 0:00:09.010 ***** 2026-02-05 01:00:02.988312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988332 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.988340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2026-02-05 01:00:02.988357 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988398 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.988403 | orchestrator | 2026-02-05 01:00:02.988409 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-05 01:00:02.988415 | orchestrator | Thursday 05 February 2026 00:57:35 +0000 (0:00:00.731) 0:00:09.741 ***** 2026-02-05 01:00:02.988425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.988431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.988444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.988456 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988497 | orchestrator | 2026-02-05 01:00:02.988503 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 
2026-02-05 01:00:02.988508 | orchestrator | Thursday 05 February 2026 00:57:38 +0000 (0:00:03.318) 0:00:13.060 ***** 2026-02-05 01:00:02.988519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.988528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.988541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.988560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.988587 | orchestrator | 2026-02-05 01:00:02.988592 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-05 01:00:02.988597 | orchestrator | Thursday 05 February 2026 00:57:43 +0000 (0:00:05.179) 0:00:18.239 ***** 2026-02-05 01:00:02.988603 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:00:02.988608 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.988614 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:00:02.988663 | orchestrator | 2026-02-05 01:00:02.988670 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-05 01:00:02.988676 | orchestrator | Thursday 05 February 2026 00:57:45 +0000 (0:00:01.392) 0:00:19.632 ***** 2026-02-05 01:00:02.988682 | orchestrator | skipping: [testbed-node-0] 
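As an aside, the skip pattern in the "Copying over backend internal TLS certificate/key" tasks above can be sketched as follows. This is a minimal illustration, not kolla-ansible source: the service dictionaries are trimmed from the log's item output, and the filtering condition is an assumption inferred from the fact that every item was skipped while `tls_backend` is `'no'` for every frontend in this run.

```python
# Trimmed service map, values taken from the loop items logged above.
services = {
    "keystone": {
        "container_name": "keystone",
        "enabled": True,
        "haproxy": {"keystone_internal": {"tls_backend": "no"},
                    "keystone_external": {"tls_backend": "no"}},
        "healthcheck": {"test": ["CMD-SHELL",
                                 "healthcheck_curl http://192.168.16.10:5000"]},
    },
    "keystone-ssh": {
        "container_name": "keystone_ssh",
        "enabled": True,
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_listen sshd 8023"]},
    },
    "keystone-fernet": {
        "container_name": "keystone_fernet",
        "enabled": True,
        "healthcheck": {"test": ["CMD-SHELL", "/usr/bin/fernet-healthcheck.sh"]},
    },
}


def tls_copy_candidates(services):
    """Assumed filter for the backend-TLS copy loop: iterate enabled
    services and flag those with any tls_backend == 'yes' frontend.
    With tls_backend 'no' everywhere (as in this run), no item needs
    a certificate copied, matching the all-'skipping' result above."""
    out = []
    for name, svc in services.items():
        if not svc.get("enabled"):
            continue
        haproxy = svc.get("haproxy", {})
        needs_tls = any(fe.get("tls_backend") == "yes" for fe in haproxy.values())
        out.append((name, needs_tls))
    return out


print(tls_copy_candidates(services))
```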
2026-02-05 01:00:02.988688 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988698 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.988702 | orchestrator | 2026-02-05 01:00:02.988707 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-05 01:00:02.988712 | orchestrator | Thursday 05 February 2026 00:57:45 +0000 (0:00:00.542) 0:00:20.174 ***** 2026-02-05 01:00:02.988790 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.988797 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988806 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.988813 | orchestrator | 2026-02-05 01:00:02.988819 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-05 01:00:02.988826 | orchestrator | Thursday 05 February 2026 00:57:45 +0000 (0:00:00.296) 0:00:20.470 ***** 2026-02-05 01:00:02.988832 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.988838 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988845 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.988850 | orchestrator | 2026-02-05 01:00:02.988853 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-05 01:00:02.988857 | orchestrator | Thursday 05 February 2026 00:57:46 +0000 (0:00:00.463) 0:00:20.934 ***** 2026-02-05 01:00:02.988870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 01:00:02.988905 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988922 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.988928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-05 01:00:02.988938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-05 
01:00:02.988945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-05 01:00:02.988955 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.988961 | orchestrator | 2026-02-05 01:00:02.988967 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 01:00:02.988974 | orchestrator | Thursday 05 February 2026 00:57:47 +0000 (0:00:00.739) 0:00:21.674 ***** 2026-02-05 01:00:02.988980 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.988986 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.988993 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.989000 | orchestrator | 2026-02-05 01:00:02.989008 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-05 01:00:02.989015 | orchestrator | Thursday 05 February 2026 00:57:47 +0000 (0:00:00.298) 0:00:21.973 ***** 2026-02-05 01:00:02.989020 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 01:00:02.989027 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 01:00:02.989033 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-05 01:00:02.989039 | orchestrator | 2026-02-05 01:00:02.989045 | orchestrator | TASK 
[keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-05 01:00:02.989052 | orchestrator | Thursday 05 February 2026 00:57:49 +0000 (0:00:01.582) 0:00:23.555 ***** 2026-02-05 01:00:02.989058 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:00:02.989066 | orchestrator | 2026-02-05 01:00:02.989070 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-05 01:00:02.989073 | orchestrator | Thursday 05 February 2026 00:57:50 +0000 (0:00:00.934) 0:00:24.489 ***** 2026-02-05 01:00:02.989077 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.989081 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.989085 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.989089 | orchestrator | 2026-02-05 01:00:02.989092 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-05 01:00:02.989096 | orchestrator | Thursday 05 February 2026 00:57:50 +0000 (0:00:00.729) 0:00:25.218 ***** 2026-02-05 01:00:02.989100 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-05 01:00:02.989104 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:00:02.989107 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 01:00:02.989111 | orchestrator | 2026-02-05 01:00:02.989115 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-05 01:00:02.989122 | orchestrator | Thursday 05 February 2026 00:57:51 +0000 (0:00:01.062) 0:00:26.281 ***** 2026-02-05 01:00:02.989126 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:00:02.989130 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:00:02.989134 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:00:02.989138 | orchestrator | 2026-02-05 01:00:02.989142 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-05 01:00:02.989146 | orchestrator | 
Thursday 05 February 2026 00:57:52 +0000 (0:00:00.293) 0:00:26.574 ***** 2026-02-05 01:00:02.989149 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 01:00:02.989153 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 01:00:02.989157 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-05 01:00:02.989161 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 01:00:02.989165 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 01:00:02.989169 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-05 01:00:02.989172 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 01:00:02.989182 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 01:00:02.989186 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-05 01:00:02.989189 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 01:00:02.989196 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 01:00:02.989200 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-05 01:00:02.989204 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 01:00:02.989208 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 
2026-02-05 01:00:02.989212 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-05 01:00:02.989215 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:00:02.989219 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:00:02.989223 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:00:02.989227 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:00:02.989231 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:00:02.989234 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:00:02.989238 | orchestrator | 2026-02-05 01:00:02.989242 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-05 01:00:02.989246 | orchestrator | Thursday 05 February 2026 00:58:01 +0000 (0:00:09.111) 0:00:35.686 ***** 2026-02-05 01:00:02.989249 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:00:02.989253 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:00:02.989257 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:00:02.989261 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:00:02.989265 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:00:02.989268 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:00:02.989272 | orchestrator | 2026-02-05 01:00:02.989276 | 
orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-05 01:00:02.989280 | orchestrator | Thursday 05 February 2026 00:58:04 +0000 (0:00:03.092) 0:00:38.778 ***** 2026-02-05 01:00:02.989288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.989317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.989329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-05 01:00:02.989334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2026-02-05 01:00:02.989338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 01:00:02.989342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-05 01:00:02.989350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.989358 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.989365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-05 01:00:02.989369 | orchestrator | 2026-02-05 01:00:02.989373 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 01:00:02.989377 | orchestrator | Thursday 05 February 2026 00:58:06 +0000 (0:00:02.435) 0:00:41.213 ***** 2026-02-05 01:00:02.989381 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.989385 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.989388 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.989392 | orchestrator | 2026-02-05 01:00:02.989396 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-05 01:00:02.989400 | orchestrator | Thursday 05 February 2026 00:58:07 +0000 (0:00:00.313) 
0:00:41.526 ***** 2026-02-05 01:00:02.989404 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989407 | orchestrator | 2026-02-05 01:00:02.989411 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-05 01:00:02.989415 | orchestrator | Thursday 05 February 2026 00:58:09 +0000 (0:00:02.637) 0:00:44.163 ***** 2026-02-05 01:00:02.989419 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989423 | orchestrator | 2026-02-05 01:00:02.989427 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-05 01:00:02.989432 | orchestrator | Thursday 05 February 2026 00:58:12 +0000 (0:00:02.666) 0:00:46.830 ***** 2026-02-05 01:00:02.989437 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:00:02.989441 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:00:02.989446 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:00:02.989451 | orchestrator | 2026-02-05 01:00:02.989456 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-05 01:00:02.989460 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:00.854) 0:00:47.684 ***** 2026-02-05 01:00:02.989465 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:00:02.989469 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:00:02.989474 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:00:02.989478 | orchestrator | 2026-02-05 01:00:02.989483 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-05 01:00:02.989487 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:00.418) 0:00:48.103 ***** 2026-02-05 01:00:02.989492 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.989500 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:00:02.989505 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:00:02.989509 | orchestrator | 2026-02-05 01:00:02.989513 | orchestrator | 
TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-05 01:00:02.989518 | orchestrator | Thursday 05 February 2026 00:58:13 +0000 (0:00:00.301) 0:00:48.405 ***** 2026-02-05 01:00:02.989522 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989527 | orchestrator | 2026-02-05 01:00:02.989532 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-05 01:00:02.989537 | orchestrator | Thursday 05 February 2026 00:58:29 +0000 (0:00:15.960) 0:01:04.365 ***** 2026-02-05 01:00:02.989541 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989546 | orchestrator | 2026-02-05 01:00:02.989551 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-05 01:00:02.989555 | orchestrator | Thursday 05 February 2026 00:58:41 +0000 (0:00:11.831) 0:01:16.196 ***** 2026-02-05 01:00:02.989560 | orchestrator | 2026-02-05 01:00:02.989565 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-05 01:00:02.989569 | orchestrator | Thursday 05 February 2026 00:58:41 +0000 (0:00:00.064) 0:01:16.260 ***** 2026-02-05 01:00:02.989574 | orchestrator | 2026-02-05 01:00:02.989578 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-05 01:00:02.989585 | orchestrator | Thursday 05 February 2026 00:58:41 +0000 (0:00:00.061) 0:01:16.322 ***** 2026-02-05 01:00:02.989590 | orchestrator | 2026-02-05 01:00:02.989595 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-05 01:00:02.989599 | orchestrator | Thursday 05 February 2026 00:58:41 +0000 (0:00:00.069) 0:01:16.392 ***** 2026-02-05 01:00:02.989604 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989608 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:00:02.989613 | orchestrator | changed: [testbed-node-2] 2026-02-05 
01:00:02.989617 | orchestrator | 2026-02-05 01:00:02.989640 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-05 01:00:02.989645 | orchestrator | Thursday 05 February 2026 00:58:52 +0000 (0:00:10.533) 0:01:26.925 ***** 2026-02-05 01:00:02.989649 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989654 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:00:02.989658 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:00:02.989663 | orchestrator | 2026-02-05 01:00:02.989667 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-05 01:00:02.989672 | orchestrator | Thursday 05 February 2026 00:58:56 +0000 (0:00:04.557) 0:01:31.482 ***** 2026-02-05 01:00:02.989676 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:00:02.989681 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:00:02.989686 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989690 | orchestrator | 2026-02-05 01:00:02.989695 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-05 01:00:02.989699 | orchestrator | Thursday 05 February 2026 00:59:04 +0000 (0:00:07.435) 0:01:38.918 ***** 2026-02-05 01:00:02.989705 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:00:02.989709 | orchestrator | 2026-02-05 01:00:02.989714 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-05 01:00:02.989718 | orchestrator | Thursday 05 February 2026 00:59:04 +0000 (0:00:00.563) 0:01:39.482 ***** 2026-02-05 01:00:02.989723 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:00:02.989728 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:00:02.989736 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:00:02.989741 | orchestrator | 2026-02-05 01:00:02.989746 | orchestrator | TASK 
[keystone : Run key distribution] ***************************************** 2026-02-05 01:00:02.989750 | orchestrator | Thursday 05 February 2026 00:59:05 +0000 (0:00:00.997) 0:01:40.480 ***** 2026-02-05 01:00:02.989755 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:00:02.989759 | orchestrator | 2026-02-05 01:00:02.989764 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-05 01:00:02.989773 | orchestrator | Thursday 05 February 2026 00:59:07 +0000 (0:00:01.676) 0:01:42.156 ***** 2026-02-05 01:00:02.989778 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-05 01:00:02.989782 | orchestrator | 2026-02-05 01:00:02.989787 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-05 01:00:02.989791 | orchestrator | Thursday 05 February 2026 00:59:21 +0000 (0:00:13.472) 0:01:55.629 ***** 2026-02-05 01:00:02.989796 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-05 01:00:02.989801 | orchestrator | 2026-02-05 01:00:02.989806 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-05 01:00:02.989810 | orchestrator | Thursday 05 February 2026 00:59:49 +0000 (0:00:28.400) 0:02:24.030 ***** 2026-02-05 01:00:02.989815 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-05 01:00:02.989820 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-05 01:00:02.989825 | orchestrator | 2026-02-05 01:00:02.989830 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-05 01:00:02.989835 | orchestrator | Thursday 05 February 2026 00:59:56 +0000 (0:00:07.324) 0:02:31.354 ***** 2026-02-05 01:00:02.989839 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:00:02.989842 | orchestrator | 2026-02-05 
01:00:02.989846 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-05 01:00:02.989850 | orchestrator | Thursday 05 February 2026 00:59:56 +0000 (0:00:00.107)       0:02:31.461 *****
2026-02-05 01:00:02.989854 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:00:02.989861 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-05 01:00:02.989865 | orchestrator | Thursday 05 February 2026 00:59:57 +0000 (0:00:00.103)       0:02:31.565 *****
2026-02-05 01:00:02.989869 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:00:02.989877 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-05 01:00:02.989880 | orchestrator | Thursday 05 February 2026 00:59:57 +0000 (0:00:00.107)       0:02:31.672 *****
2026-02-05 01:00:02.989884 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:00:02.989892 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-05 01:00:02.989896 | orchestrator | Thursday 05 February 2026 00:59:57 +0000 (0:00:00.419)       0:02:32.092 *****
2026-02-05 01:00:02.989899 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:00:02.989907 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-05 01:00:02.989911 | orchestrator | Thursday 05 February 2026 01:00:01 +0000 (0:00:03.531)       0:02:35.624 *****
2026-02-05 01:00:02.989915 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:00:02.989918 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:00:02.989922 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:00:02.989930 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:00:02.989936 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0  failed=0  skipped=17  rescued=0  ignored=0
2026-02-05 01:00:02.989944 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
2026-02-05 01:00:02.989948 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0  failed=0  skipped=12  rescued=0  ignored=0
2026-02-05 01:00:02.989960 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:00:02.989963 | orchestrator | Thursday 05 February 2026 01:00:01 +0000 (0:00:00.375)       0:02:36.000 *****
2026-02-05 01:00:02.989971 | orchestrator | ===============================================================================
2026-02-05 01:00:02.989975 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.40s
2026-02-05 01:00:02.989979 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.96s
2026-02-05 01:00:02.989983 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.47s
2026-02-05 01:00:02.989987 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.83s
2026-02-05 01:00:02.989991 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 10.53s
2026-02-05 01:00:02.989994 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.11s
2026-02-05 01:00:02.989998 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.44s
2026-02-05 01:00:02.990002 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.32s
2026-02-05 01:00:02.990006 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.18s
2026-02-05 01:00:02.990010 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.56s
2026-02-05 01:00:02.990056 | orchestrator | keystone : Creating default user role ----------------------------------- 3.53s
2026-02-05 01:00:02.990063 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.32s
2026-02-05 01:00:02.990073 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.32s
2026-02-05 01:00:02.990080 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.09s
2026-02-05 01:00:02.990085 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.67s
2026-02-05 01:00:02.990089 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.64s
2026-02-05 01:00:02.990093 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.44s
2026-02-05 01:00:02.990097 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.69s
2026-02-05 01:00:02.990101 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.68s
2026-02-05 01:00:02.990105 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.58s
2026-02-05 01:00:02.990108 | orchestrator | 2026-02-05 01:00:02 | INFO  | Task 6118fa7c-c891-4aec-bbfd-6c303bb44c1b is in state STARTED
2026-02-05 01:00:02.990112 | orchestrator | 2026-02-05 01:00:02 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED
2026-02-05 01:00:02.990116 | orchestrator | 2026-02-05 01:00:02 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:00:02.990511 | orchestrator | 2026-02-05 01:00:02 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:00:02.990531 | orchestrator | 2026-02-05 01:00:02 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:00:06.040777 | orchestrator | 2026-02-05 01:00:06 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state STARTED
2026-02-05 01:00:06.041236 | orchestrator | 2026-02-05 01:00:06 | INFO  | Task 6118fa7c-c891-4aec-bbfd-6c303bb44c1b is in state STARTED
2026-02-05 01:00:06.043174 | orchestrator | 2026-02-05 01:00:06 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED
2026-02-05 01:00:06.044118 | orchestrator | 2026-02-05 01:00:06 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:00:06.044926 | orchestrator | 2026-02-05 01:00:06 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:00:06.045039 | orchestrator | 2026-02-05 01:00:06 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:00:36.496357 | orchestrator | 2026-02-05 01:00:36 | INFO  | Task b8d5a301-b486-4de3-8ba7-3565ab07c528 is in state SUCCESS
2026-02-05 01:00:36.498110 | orchestrator | 2026-02-05 01:00:36 | INFO  | Task 6118fa7c-c891-4aec-bbfd-6c303bb44c1b is in state STARTED
2026-02-05 01:00:36.504005 | orchestrator | 2026-02-05 01:00:36 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED
2026-02-05 01:00:36.504063 | orchestrator | 2026-02-05 01:00:36 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:00:36.504088 | orchestrator | 2026-02-05 01:00:36 | INFO  | Task 4ec0488d-09b9-4914-acd3-3dab9393698d is in state STARTED
2026-02-05 01:00:36.504093 | orchestrator | 2026-02-05 01:00:36 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:00:36.504099 | orchestrator | 2026-02-05 01:00:36 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:02:10.619093 | orchestrator | 2026-02-05 01:02:10 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED
2026-02-05 01:02:10.620818 | orchestrator | 2026-02-05 01:02:10 | INFO  | Task 6118fa7c-c891-4aec-bbfd-6c303bb44c1b is in state SUCCESS
2026-02-05 01:02:10.622122 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-05 01:02:10.622155 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-05 01:02:10.622160 | orchestrator | Thursday 05 February 2026 00:59:45 +0000 (0:00:00.170)       0:00:00.170 *****
2026-02-05 01:02:10.622169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
TASK [osism.services.cephclient : Create required directories] *****************
2026-02-05 01:02:10.622185 | orchestrator | Thursday 05 February 2026 00:59:45 +0000 (0:00:00.166) 0:00:00.337 *****
2026-02-05 01:02:10.622193 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-05 01:02:10.622198 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-05 01:02:10.622202 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-05 01:02:10.622207 | orchestrator |
2026-02-05 01:02:10.622212 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-05 01:02:10.622216 | orchestrator | Thursday 05 February 2026 00:59:46 +0000 (0:00:01.136) 0:00:01.473 *****
2026-02-05 01:02:10.622221 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-05 01:02:10.622225 | orchestrator |
2026-02-05 01:02:10.622230 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-05 01:02:10.622234 | orchestrator | Thursday 05 February 2026 00:59:47 +0000 (0:00:01.129) 0:00:02.603 *****
2026-02-05 01:02:10.622253 | orchestrator | changed: [testbed-manager]
2026-02-05 01:02:10.622257 | orchestrator |
2026-02-05 01:02:10.622262 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-05 01:02:10.622266 | orchestrator | Thursday 05 February 2026 00:59:48 +0000 (0:00:00.824) 0:00:03.428 *****
2026-02-05 01:02:10.622270 | orchestrator | changed: [testbed-manager]
2026-02-05 01:02:10.622274 | orchestrator |
2026-02-05 01:02:10.622279 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-05 01:02:10.622283 | orchestrator | Thursday 05 February 2026 00:59:49 +0000 (0:00:00.747) 0:00:04.175 *****
2026-02-05 01:02:10.622287 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-05 01:02:10.622292 | orchestrator | ok: [testbed-manager]
2026-02-05 01:02:10.622296 | orchestrator |
2026-02-05 01:02:10.622300 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-05 01:02:10.622304 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:37.052) 0:00:41.228 *****
2026-02-05 01:02:10.622309 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-05 01:02:10.622314 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-05 01:02:10.622318 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-05 01:02:10.622322 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-05 01:02:10.622327 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-05 01:02:10.622331 | orchestrator |
2026-02-05 01:02:10.622349 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-05 01:02:10.622354 | orchestrator | Thursday 05 February 2026 01:00:29 +0000 (0:00:03.596) 0:00:44.825 *****
2026-02-05 01:02:10.622358 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-05 01:02:10.622363 | orchestrator |
2026-02-05 01:02:10.622368 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-05 01:02:10.622375 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:00.416) 0:00:45.242 *****
2026-02-05 01:02:10.622381 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:02:10.622385 | orchestrator |
2026-02-05 01:02:10.622397 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-05 01:02:10.622401 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:00.119) 0:00:45.361 *****
2026-02-05 01:02:10.622406 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:02:10.622442 | orchestrator |
2026-02-05 01:02:10.622448 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-05 01:02:10.622456 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:00.416) 0:00:45.777 *****
2026-02-05 01:02:10.622463 | orchestrator | changed: [testbed-manager]
2026-02-05 01:02:10.622470 | orchestrator |
2026-02-05 01:02:10.622477 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-05 01:02:10.622483 | orchestrator | Thursday 05 February 2026 01:00:31 +0000 (0:00:01.365) 0:00:47.143 *****
2026-02-05 01:02:10.622491 | orchestrator | changed: [testbed-manager]
2026-02-05 01:02:10.622497 | orchestrator |
2026-02-05 01:02:10.622504 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-05 01:02:10.622511 | orchestrator | Thursday 05 February 2026 01:00:32 +0000 (0:00:00.729) 0:00:47.873 *****
2026-02-05 01:02:10.622517 | orchestrator | changed: [testbed-manager]
2026-02-05 01:02:10.622524 | orchestrator |
2026-02-05 01:02:10.622531 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-05 01:02:10.622538 | orchestrator | Thursday 05 February 2026 01:00:33 +0000 (0:00:00.534) 0:00:48.407 *****
2026-02-05 01:02:10.622546 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-05 01:02:10.622553 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-05 01:02:10.622596 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-05 01:02:10.622603 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-05 01:02:10.622608 | orchestrator |
2026-02-05 01:02:10.622612 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:02:10.622623 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-05 01:02:10.622628 | orchestrator |
2026-02-05 01:02:10.622634 | orchestrator |
2026-02-05 01:02:10.622651 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:02:10.622660 | orchestrator | Thursday 05 February 2026 01:00:34 +0000 (0:00:01.272) 0:00:49.679 *****
2026-02-05 01:02:10.622668 | orchestrator | ===============================================================================
2026-02-05 01:02:10.622676 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.05s
2026-02-05 01:02:10.622684 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.60s
2026-02-05 01:02:10.622692 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.37s
2026-02-05 01:02:10.622698 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.27s
2026-02-05 01:02:10.622703 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.14s
2026-02-05 01:02:10.622708 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s
2026-02-05 01:02:10.622713 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.83s
2026-02-05 01:02:10.622720 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.75s
2026-02-05 01:02:10.622727 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s
2026-02-05 01:02:10.622735 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s
2026-02-05 01:02:10.622742 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.42s
2026-02-05 01:02:10.622750 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.42s
2026-02-05 01:02:10.622757 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.17s
2026-02-05 01:02:10.622764 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-02-05 01:02:10.622772 | orchestrator |
2026-02-05 01:02:10.622779 | orchestrator |
2026-02-05 01:02:10.622787 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:02:10.622795 | orchestrator |
2026-02-05 01:02:10.622803 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:02:10.622811 | orchestrator | Thursday 05 February 2026 01:00:06 +0000 (0:00:00.205) 0:00:00.205 *****
2026-02-05 01:02:10.622819 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:02:10.622827 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:02:10.622832 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:02:10.622837 | orchestrator |
2026-02-05 01:02:10.622843 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:02:10.622848 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.267) 0:00:00.473 *****
2026-02-05 01:02:10.622853 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-02-05 01:02:10.622858 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-02-05 01:02:10.622864 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-02-05 01:02:10.622869 | orchestrator |
2026-02-05 01:02:10.622874 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-02-05 01:02:10.622879 | orchestrator |
2026-02-05 01:02:10.622885 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-05 01:02:10.622890 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.379) 0:00:00.852 *****
2026-02-05 01:02:10.622895 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:02:10.622901 | orchestrator |
2026-02-05 01:02:10.622915 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-02-05 01:02:10.622925 | orchestrator | Thursday 05 February 2026 01:00:08 +0000 (0:00:00.417) 0:00:01.269 *****
2026-02-05 01:02:10.622930 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-02-05 01:02:10.622940 | orchestrator |
2026-02-05 01:02:10.622945 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-02-05 01:02:10.622954 | orchestrator | Thursday 05 February 2026 01:00:11 +0000 (0:00:03.543) 0:00:04.813 *****
2026-02-05 01:02:10.622959 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-02-05 01:02:10.622965 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-02-05 01:02:10.622970 | orchestrator |
2026-02-05 01:02:10.622976 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-02-05 01:02:10.622981 | orchestrator | Thursday 05 February 2026 01:00:17 +0000 (0:00:06.440) 0:00:11.253 *****
2026-02-05 01:02:10.622986 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-05 01:02:10.622992 | orchestrator |
2026-02-05 01:02:10.622997 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-02-05 01:02:10.623002 | orchestrator | Thursday 05 February 2026 01:00:22 +0000 (0:00:04.517) 0:00:15.771 *****
2026-02-05 01:02:10.623006 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-05 01:02:10.623011 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:02:10.623015 | orchestrator |
2026-02-05 01:02:10.623020 | orchestrator | TASK [service-ks-register : barbican | Creating
roles] ************************* 2026-02-05 01:02:10.623024 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:03.888) 0:00:19.660 ***** 2026-02-05 01:02:10.623028 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:02:10.623033 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-05 01:02:10.623037 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-05 01:02:10.623042 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-05 01:02:10.623046 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-05 01:02:10.623050 | orchestrator | 2026-02-05 01:02:10.623055 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-05 01:02:10.623059 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:18.364) 0:00:38.025 ***** 2026-02-05 01:02:10.623087 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-05 01:02:10.623093 | orchestrator | 2026-02-05 01:02:10.623097 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-05 01:02:10.623102 | orchestrator | Thursday 05 February 2026 01:00:48 +0000 (0:00:04.070) 0:00:42.095 ***** 2026-02-05 01:02:10.623108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623216 | orchestrator | 2026-02-05 01:02:10.623223 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-05 01:02:10.623235 | orchestrator | Thursday 05 February 2026 01:00:50 +0000 (0:00:01.772) 0:00:43.867 ***** 2026-02-05 01:02:10.623244 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-05 01:02:10.623249 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-05 01:02:10.623254 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-05 01:02:10.623258 | orchestrator | 2026-02-05 01:02:10.623262 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-05 01:02:10.623267 | orchestrator | Thursday 05 February 2026 01:00:51 +0000 (0:00:01.317) 0:00:45.185 ***** 2026-02-05 01:02:10.623271 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:10.623276 | orchestrator | 2026-02-05 01:02:10.623280 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-05 01:02:10.623284 | orchestrator | Thursday 05 February 2026 01:00:52 +0000 (0:00:00.121) 0:00:45.307 ***** 2026-02-05 01:02:10.623289 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:10.623293 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:10.623298 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:10.623302 | orchestrator | 2026-02-05 01:02:10.623307 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-05 01:02:10.623311 | orchestrator | Thursday 05 February 2026 01:00:52 +0000 (0:00:00.377) 0:00:45.684 ***** 2026-02-05 01:02:10.623315 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 
01:02:10.623320 | orchestrator | 2026-02-05 01:02:10.623324 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-05 01:02:10.623329 | orchestrator | Thursday 05 February 2026 01:00:52 +0000 (0:00:00.468) 0:00:46.152 ***** 2026-02-05 01:02:10.623337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623391 | orchestrator | 2026-02-05 01:02:10.623396 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-05 01:02:10.623401 | orchestrator | Thursday 05 February 2026 01:00:56 +0000 (0:00:03.643) 0:00:49.795 ***** 2026-02-05 01:02:10.623408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.623416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623439 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:10.623446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.623453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2026-02-05 01:02:10.623472 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:10.623480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.623491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623509 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:10.623515 | orchestrator | 2026-02-05 01:02:10.623522 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-05 01:02:10.623529 | orchestrator | Thursday 05 February 2026 01:00:58 +0000 (0:00:01.744) 0:00:51.540 ***** 2026-02-05 01:02:10.623536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.623544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623576 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:10.623584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2026-02-05 01:02:10.623821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623846 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:10.623851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.623861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.623873 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:10.623877 | orchestrator | 2026-02-05 01:02:10.623882 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-05 01:02:10.623886 | orchestrator | Thursday 05 February 2026 01:00:59 
+0000 (0:00:01.290) 0:00:52.831 ***** 2026-02-05 01:02:10.623903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623913 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.623919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.623953 | orchestrator | 2026-02-05 01:02:10.623958 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-05 01:02:10.623962 | orchestrator | Thursday 05 February 2026 01:01:03 +0000 (0:00:04.242) 0:00:57.073 ***** 2026-02-05 01:02:10.623967 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.623971 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:02:10.623976 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:02:10.623980 | orchestrator | 2026-02-05 01:02:10.623985 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-05 01:02:10.623989 | orchestrator | Thursday 05 February 2026 01:01:05 +0000 (0:00:01.707) 0:00:58.781 ***** 2026-02-05 01:02:10.623993 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:02:10.623998 | orchestrator | 2026-02-05 01:02:10.624002 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-05 01:02:10.624007 | orchestrator | Thursday 05 February 2026 
01:01:07 +0000 (0:00:01.686) 0:01:00.467 ***** 2026-02-05 01:02:10.624011 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:10.624015 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:10.624022 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:10.624026 | orchestrator | 2026-02-05 01:02:10.624030 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-05 01:02:10.624035 | orchestrator | Thursday 05 February 2026 01:01:07 +0000 (0:00:00.484) 0:01:00.952 ***** 2026-02-05 01:02:10.624043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.624051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.624056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.624061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624096 | orchestrator | 2026-02-05 01:02:10.624101 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-05 01:02:10.624105 | orchestrator | Thursday 05 February 2026 01:01:16 +0000 (0:00:08.965) 0:01:09.918 ***** 2026-02-05 01:02:10.624110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.624117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.624124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.624129 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
01:02:10.624136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.624141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.624145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.624150 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:10.624154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-05 01:02:10.624166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-02-05 01:02:10.624171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:02:10.624175 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:10.624180 | orchestrator | 2026-02-05 01:02:10.624184 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-05 01:02:10.624189 | orchestrator | Thursday 05 February 2026 01:01:18 +0000 (0:00:01.435) 0:01:11.353 ***** 2026-02-05 01:02:10.624196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.624201 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.624206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-05 01:02:10.624213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624230 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:02:10.624314 | orchestrator | 2026-02-05 01:02:10.624325 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-05 01:02:10.624336 | orchestrator | Thursday 05 
February 2026 01:01:22 +0000 (0:00:04.306) 0:01:15.660 ***** 2026-02-05 01:02:10.624343 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:02:10.624349 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:02:10.624356 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:02:10.624362 | orchestrator | 2026-02-05 01:02:10.624369 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-05 01:02:10.624388 | orchestrator | Thursday 05 February 2026 01:01:22 +0000 (0:00:00.552) 0:01:16.212 ***** 2026-02-05 01:02:10.624396 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.624402 | orchestrator | 2026-02-05 01:02:10.624409 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-05 01:02:10.624415 | orchestrator | Thursday 05 February 2026 01:01:25 +0000 (0:00:02.177) 0:01:18.389 ***** 2026-02-05 01:02:10.624422 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.624428 | orchestrator | 2026-02-05 01:02:10.624435 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-05 01:02:10.624441 | orchestrator | Thursday 05 February 2026 01:01:27 +0000 (0:00:02.662) 0:01:21.052 ***** 2026-02-05 01:02:10.624449 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.624455 | orchestrator | 2026-02-05 01:02:10.624462 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 01:02:10.624469 | orchestrator | Thursday 05 February 2026 01:01:39 +0000 (0:00:11.546) 0:01:32.598 ***** 2026-02-05 01:02:10.624476 | orchestrator | 2026-02-05 01:02:10.624483 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-05 01:02:10.624491 | orchestrator | Thursday 05 February 2026 01:01:39 +0000 (0:00:00.132) 0:01:32.730 ***** 2026-02-05 01:02:10.624498 | orchestrator | 2026-02-05 01:02:10.624505 | orchestrator 
| TASK [barbican : Flush handlers] *********************************************** 2026-02-05 01:02:10.624512 | orchestrator | Thursday 05 February 2026 01:01:39 +0000 (0:00:00.077) 0:01:32.808 ***** 2026-02-05 01:02:10.624520 | orchestrator | 2026-02-05 01:02:10.624527 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-05 01:02:10.624535 | orchestrator | Thursday 05 February 2026 01:01:39 +0000 (0:00:00.078) 0:01:32.886 ***** 2026-02-05 01:02:10.624541 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:02:10.624548 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:02:10.624554 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.624582 | orchestrator | 2026-02-05 01:02:10.624590 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-05 01:02:10.624597 | orchestrator | Thursday 05 February 2026 01:01:48 +0000 (0:00:08.470) 0:01:41.357 ***** 2026-02-05 01:02:10.624611 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:02:10.624619 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.624625 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:02:10.624632 | orchestrator | 2026-02-05 01:02:10.624640 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-05 01:02:10.624647 | orchestrator | Thursday 05 February 2026 01:01:57 +0000 (0:00:09.688) 0:01:51.045 ***** 2026-02-05 01:02:10.624655 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:02:10.624662 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:02:10.624671 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:02:10.624678 | orchestrator | 2026-02-05 01:02:10.624686 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:02:10.624738 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 
2026-02-05 01:02:10.624749 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:02:10.624757 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:02:10.624764 | orchestrator | 2026-02-05 01:02:10.624772 | orchestrator | 2026-02-05 01:02:10.624802 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:02:10.624810 | orchestrator | Thursday 05 February 2026 01:02:08 +0000 (0:00:10.829) 0:02:01.874 ***** 2026-02-05 01:02:10.624817 | orchestrator | =============================================================================== 2026-02-05 01:02:10.624825 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.36s 2026-02-05 01:02:10.624832 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.55s 2026-02-05 01:02:10.624840 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.82s 2026-02-05 01:02:10.624847 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.69s 2026-02-05 01:02:10.624854 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.97s 2026-02-05 01:02:10.624862 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.47s 2026-02-05 01:02:10.624888 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.44s 2026-02-05 01:02:10.624896 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.52s 2026-02-05 01:02:10.624903 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.31s 2026-02-05 01:02:10.624910 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.24s 2026-02-05 01:02:10.624918 | 
orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.07s 2026-02-05 01:02:10.624925 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.89s 2026-02-05 01:02:10.624932 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.64s 2026-02-05 01:02:10.624939 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.54s 2026-02-05 01:02:10.624946 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.66s 2026-02-05 01:02:10.624953 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.18s 2026-02-05 01:02:10.624959 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.77s 2026-02-05 01:02:10.624966 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.74s 2026-02-05 01:02:10.624978 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.71s 2026-02-05 01:02:10.624986 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.69s 2026-02-05 01:02:10.624993 | orchestrator | 2026-02-05 01:02:10 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:10.625011 | orchestrator | 2026-02-05 01:02:10 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:10.625019 | orchestrator | 2026-02-05 01:02:10 | INFO  | Task 4ec0488d-09b9-4914-acd3-3dab9393698d is in state STARTED 2026-02-05 01:02:10.625124 | orchestrator | 2026-02-05 01:02:10 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:10.625137 | orchestrator | 2026-02-05 01:02:10 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:13.651645 | orchestrator | 2026-02-05 01:02:13 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in 
state STARTED 2026-02-05 01:02:13.652583 | orchestrator | 2026-02-05 01:02:13 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:13.652638 | orchestrator | 2026-02-05 01:02:13 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:13.652759 | orchestrator | 2026-02-05 01:02:13 | INFO  | Task 4ec0488d-09b9-4914-acd3-3dab9393698d is in state STARTED 2026-02-05 01:02:13.653309 | orchestrator | 2026-02-05 01:02:13 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:13.654519 | orchestrator | 2026-02-05 01:02:13 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:16.680416 | orchestrator | 2026-02-05 01:02:16 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:16.682082 | orchestrator | 2026-02-05 01:02:16 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:16.683988 | orchestrator | 2026-02-05 01:02:16 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:16.684984 | orchestrator | 2026-02-05 01:02:16 | INFO  | Task 4ec0488d-09b9-4914-acd3-3dab9393698d is in state SUCCESS 2026-02-05 01:02:16.686853 | orchestrator | 2026-02-05 01:02:16 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:16.686880 | orchestrator | 2026-02-05 01:02:16 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:19.736250 | orchestrator | 2026-02-05 01:02:19 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:19.738883 | orchestrator | 2026-02-05 01:02:19 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:19.740762 | orchestrator | 2026-02-05 01:02:19 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:19.742727 | orchestrator | 2026-02-05 01:02:19 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state 
STARTED 2026-02-05 01:02:19.743155 | orchestrator | 2026-02-05 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:22.796712 | orchestrator | 2026-02-05 01:02:22 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:22.798516 | orchestrator | 2026-02-05 01:02:22 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:22.800956 | orchestrator | 2026-02-05 01:02:22 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:22.802878 | orchestrator | 2026-02-05 01:02:22 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:22.803118 | orchestrator | 2026-02-05 01:02:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:25.844876 | orchestrator | 2026-02-05 01:02:25 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:25.847914 | orchestrator | 2026-02-05 01:02:25 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:25.849864 | orchestrator | 2026-02-05 01:02:25 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:25.852056 | orchestrator | 2026-02-05 01:02:25 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:25.852126 | orchestrator | 2026-02-05 01:02:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:28.887252 | orchestrator | 2026-02-05 01:02:28 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:28.890125 | orchestrator | 2026-02-05 01:02:28 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:28.890403 | orchestrator | 2026-02-05 01:02:28 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:28.891150 | orchestrator | 2026-02-05 01:02:28 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 
01:02:28.891208 | orchestrator | 2026-02-05 01:02:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:31.936038 | orchestrator | 2026-02-05 01:02:31 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:31.937939 | orchestrator | 2026-02-05 01:02:31 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:31.939436 | orchestrator | 2026-02-05 01:02:31 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:31.941052 | orchestrator | 2026-02-05 01:02:31 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:31.941174 | orchestrator | 2026-02-05 01:02:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:34.983054 | orchestrator | 2026-02-05 01:02:34 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:34.985817 | orchestrator | 2026-02-05 01:02:34 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:34.987825 | orchestrator | 2026-02-05 01:02:34 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:34.989337 | orchestrator | 2026-02-05 01:02:34 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:34.989470 | orchestrator | 2026-02-05 01:02:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:38.023750 | orchestrator | 2026-02-05 01:02:38 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:38.024418 | orchestrator | 2026-02-05 01:02:38 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:38.025260 | orchestrator | 2026-02-05 01:02:38 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:38.026234 | orchestrator | 2026-02-05 01:02:38 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:38.026316 | orchestrator 
| 2026-02-05 01:02:38 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:41.066824 | orchestrator | 2026-02-05 01:02:41 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:41.068948 | orchestrator | 2026-02-05 01:02:41 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:41.070783 | orchestrator | 2026-02-05 01:02:41 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:41.072539 | orchestrator | 2026-02-05 01:02:41 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:41.072656 | orchestrator | 2026-02-05 01:02:41 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:44.103217 | orchestrator | 2026-02-05 01:02:44 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:44.105061 | orchestrator | 2026-02-05 01:02:44 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:44.106930 | orchestrator | 2026-02-05 01:02:44 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:44.110133 | orchestrator | 2026-02-05 01:02:44 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:44.110526 | orchestrator | 2026-02-05 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:47.148845 | orchestrator | 2026-02-05 01:02:47 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:47.148916 | orchestrator | 2026-02-05 01:02:47 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:47.150189 | orchestrator | 2026-02-05 01:02:47 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:47.150210 | orchestrator | 2026-02-05 01:02:47 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:47.150216 | orchestrator | 2026-02-05 01:02:47 | INFO  | 
Wait 1 second(s) until the next check 2026-02-05 01:02:50.183700 | orchestrator | 2026-02-05 01:02:50 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:50.186053 | orchestrator | 2026-02-05 01:02:50 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:50.188841 | orchestrator | 2026-02-05 01:02:50 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:50.190898 | orchestrator | 2026-02-05 01:02:50 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:50.190948 | orchestrator | 2026-02-05 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:53.226610 | orchestrator | 2026-02-05 01:02:53 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:53.229033 | orchestrator | 2026-02-05 01:02:53 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:53.230501 | orchestrator | 2026-02-05 01:02:53 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:53.231894 | orchestrator | 2026-02-05 01:02:53 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:53.231929 | orchestrator | 2026-02-05 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:02:56.279752 | orchestrator | 2026-02-05 01:02:56 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:56.284799 | orchestrator | 2026-02-05 01:02:56 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:56.287608 | orchestrator | 2026-02-05 01:02:56 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:56.289034 | orchestrator | 2026-02-05 01:02:56 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:56.289266 | orchestrator | 2026-02-05 01:02:56 | INFO  | Wait 1 second(s) until the next 
check 2026-02-05 01:02:59.334276 | orchestrator | 2026-02-05 01:02:59 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:02:59.336285 | orchestrator | 2026-02-05 01:02:59 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state STARTED 2026-02-05 01:02:59.339714 | orchestrator | 2026-02-05 01:02:59 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:02:59.343393 | orchestrator | 2026-02-05 01:02:59 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:02:59.343463 | orchestrator | 2026-02-05 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:02.377842 | orchestrator | 2026-02-05 01:03:02 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:03:02.378842 | orchestrator | 2026-02-05 01:03:02 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:03:02.380750 | orchestrator | 2026-02-05 01:03:02 | INFO  | Task 59169a28-965c-4b3b-af98-461e13af1577 is in state SUCCESS 2026-02-05 01:03:02.381804 | orchestrator | 2026-02-05 01:03:02.381832 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-05 01:03:02.381840 | orchestrator | 2.16.14 2026-02-05 01:03:02.381861 | orchestrator | 2026-02-05 01:03:02.381867 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-02-05 01:03:02.381872 | orchestrator | 2026-02-05 01:03:02.381878 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-05 01:03:02.381883 | orchestrator | Thursday 05 February 2026 01:00:38 +0000 (0:00:00.201) 0:00:00.202 ***** 2026-02-05 01:03:02.381889 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.381894 | orchestrator | 2026-02-05 01:03:02.381900 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-05
01:03:02.381905 | orchestrator | Thursday 05 February 2026 01:00:40 +0000 (0:00:02.111) 0:00:02.313 ***** 2026-02-05 01:03:02.381910 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.381916 | orchestrator | 2026-02-05 01:03:02.381922 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-05 01:03:02.381927 | orchestrator | Thursday 05 February 2026 01:00:41 +0000 (0:00:00.913) 0:00:03.226 ***** 2026-02-05 01:03:02.381933 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.381938 | orchestrator | 2026-02-05 01:03:02.381943 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-05 01:03:02.381949 | orchestrator | Thursday 05 February 2026 01:00:42 +0000 (0:00:00.996) 0:00:04.223 ***** 2026-02-05 01:03:02.381953 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.381959 | orchestrator | 2026-02-05 01:03:02.381964 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-05 01:03:02.381969 | orchestrator | Thursday 05 February 2026 01:00:43 +0000 (0:00:01.292) 0:00:05.515 ***** 2026-02-05 01:03:02.381974 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.381979 | orchestrator | 2026-02-05 01:03:02.381985 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-05 01:03:02.381990 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:00.957) 0:00:06.472 ***** 2026-02-05 01:03:02.381994 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.381999 | orchestrator | 2026-02-05 01:03:02.382005 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-05 01:03:02.382010 | orchestrator | Thursday 05 February 2026 01:00:45 +0000 (0:00:00.903) 0:00:07.376 ***** 2026-02-05 01:03:02.382045 | orchestrator | changed: [testbed-manager] 2026-02-05 01:03:02.382051 | 
orchestrator |
2026-02-05 01:03:02.382057 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-05 01:03:02.382063 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:01.168) 0:00:08.545 *****
2026-02-05 01:03:02.382117 | orchestrator | changed: [testbed-manager]
2026-02-05 01:03:02.382127 | orchestrator |
2026-02-05 01:03:02.382132 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-05 01:03:02.382138 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:00.957) 0:00:09.502 *****
2026-02-05 01:03:02.382145 | orchestrator | changed: [testbed-manager]
2026-02-05 01:03:02.382150 | orchestrator |
2026-02-05 01:03:02.382187 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-05 01:03:02.382193 | orchestrator | Thursday 05 February 2026 01:01:51 +0000 (0:01:03.218) 0:01:12.721 *****
2026-02-05 01:03:02.382198 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:03:02.382203 | orchestrator |
2026-02-05 01:03:02.382209 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-05 01:03:02.382240 | orchestrator |
2026-02-05 01:03:02.382247 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-05 01:03:02.382279 | orchestrator | Thursday 05 February 2026 01:01:51 +0000 (0:00:00.136) 0:01:12.857 *****
2026-02-05 01:03:02.382285 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:02.382291 | orchestrator |
2026-02-05 01:03:02.382297 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-05 01:03:02.382303 | orchestrator |
2026-02-05 01:03:02.382309 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-05 01:03:02.382315 | orchestrator | Thursday 05 February 2026 01:02:03 +0000 (0:00:11.751) 0:01:24.608 *****
2026-02-05 01:03:02.382328 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:03:02.382335 | orchestrator |
2026-02-05 01:03:02.382341 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-05 01:03:02.382347 | orchestrator |
2026-02-05 01:03:02.382353 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-05 01:03:02.382420 | orchestrator | Thursday 05 February 2026 01:02:04 +0000 (0:00:01.310) 0:01:25.919 *****
2026-02-05 01:03:02.382428 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:03:02.382434 | orchestrator |
2026-02-05 01:03:02.382441 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:03:02.382448 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-05 01:03:02.382456 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:02.382464 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:02.382471 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:03:02.382479 | orchestrator |
2026-02-05 01:03:02.382486 | orchestrator |
2026-02-05 01:03:02.382493 | orchestrator |
2026-02-05 01:03:02.382499 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:03:02.382505 | orchestrator | Thursday 05 February 2026 01:02:15 +0000 (0:00:11.108) 0:01:37.027 *****
2026-02-05 01:03:02.382511 | orchestrator | ===============================================================================
2026-02-05 01:03:02.382517 | orchestrator | Create admin user ------------------------------------------------------ 63.22s
2026-02-05 01:03:02.382559 | orchestrator | Restart ceph manager service ------------------------------------------- 24.17s
2026-02-05 01:03:02.382567 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.11s
2026-02-05 01:03:02.382573 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s
2026-02-05 01:03:02.382579 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.17s
2026-02-05 01:03:02.382585 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.00s
2026-02-05 01:03:02.382590 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.96s
2026-02-05 01:03:02.382596 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.96s
2026-02-05 01:03:02.382601 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s
2026-02-05 01:03:02.382607 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.90s
2026-02-05 01:03:02.382613 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s
2026-02-05 01:03:02.382619 | orchestrator |
2026-02-05 01:03:02.382626 | orchestrator |
2026-02-05 01:03:02.382632 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:03:02.382637 | orchestrator |
2026-02-05 01:03:02.382643 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:03:02.382649 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.263) 0:00:00.263 *****
2026-02-05 01:03:02.382654 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:03:02.382660 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:03:02.382666 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:03:02.382672 | orchestrator |
2026-02-05 01:03:02.382678 | orchestrator | TASK [Group hosts based on
enabled services] ***********************************
2026-02-05 01:03:02.382683 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.311) 0:00:00.574 *****
2026-02-05 01:03:02.382690 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-05 01:03:02.382696 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-05 01:03:02.382707 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-05 01:03:02.382713 | orchestrator |
2026-02-05 01:03:02.382718 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-05 01:03:02.382724 | orchestrator |
2026-02-05 01:03:02.382729 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 01:03:02.382734 | orchestrator | Thursday 05 February 2026 01:00:08 +0000 (0:00:00.424) 0:00:00.999 *****
2026-02-05 01:03:02.382744 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:03:02.382750 | orchestrator |
2026-02-05 01:03:02.382755 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-05 01:03:02.382761 | orchestrator | Thursday 05 February 2026 01:00:08 +0000 (0:00:00.497) 0:00:01.497 *****
2026-02-05 01:03:02.382767 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-05 01:03:02.382772 | orchestrator |
2026-02-05 01:03:02.382777 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-05 01:03:02.382783 | orchestrator | Thursday 05 February 2026 01:00:12 +0000 (0:00:03.700) 0:00:05.197 *****
2026-02-05 01:03:02.382788 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-05 01:03:02.382794 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-05 01:03:02.382823 | orchestrator |
2026-02-05 01:03:02.382829 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-05 01:03:02.382834 | orchestrator | Thursday 05 February 2026 01:00:19 +0000 (0:00:07.101) 0:00:12.299 *****
2026-02-05 01:03:02.382840 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 01:03:02.382846 | orchestrator |
2026-02-05 01:03:02.382851 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-05 01:03:02.382857 | orchestrator | Thursday 05 February 2026 01:00:23 +0000 (0:00:03.733) 0:00:16.032 *****
2026-02-05 01:03:02.382863 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-05 01:03:02.382868 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:03:02.382922 | orchestrator |
2026-02-05 01:03:02.382929 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-05 01:03:02.382934 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:03.796) 0:00:19.829 *****
2026-02-05 01:03:02.382940 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 01:03:02.382946 | orchestrator |
2026-02-05 01:03:02.382952 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-02-05 01:03:02.382958 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:03.552) 0:00:23.382 *****
2026-02-05 01:03:02.382964 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-02-05 01:03:02.382969 | orchestrator |
2026-02-05 01:03:02.382975 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-02-05 01:03:02.382981 | orchestrator | Thursday 05 February 2026 01:00:34 +0000 (0:00:03.922) 0:00:27.304 *****
2026-02-05 01:03:02.382999 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383145 | orchestrator |
2026-02-05 01:03:02.383151 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-02-05 01:03:02.383157 | orchestrator | Thursday 05 February 2026 01:00:37 +0000 (0:00:03.062) 0:00:30.366 *****
2026-02-05 01:03:02.383162 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:02.383167 | orchestrator |
2026-02-05 01:03:02.383173 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-02-05 01:03:02.383178 | orchestrator | Thursday 05 February 2026 01:00:37 +0000 (0:00:00.159) 0:00:30.526 *****
2026-02-05 01:03:02.383184 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:02.383190 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:02.383196 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:02.383202 | orchestrator |
2026-02-05 01:03:02.383208 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 01:03:02.383214 | orchestrator | Thursday 05 February 2026 01:00:37 +0000 (0:00:00.262) 0:00:30.788 *****
2026-02-05 01:03:02.383224 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:03:02.383230 | orchestrator |
2026-02-05 01:03:02.383236 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-02-05 01:03:02.383242 | orchestrator | Thursday 05 February 2026 01:00:38 +0000 (0:00:00.617) 0:00:31.406 *****
2026-02-05 01:03:02.383253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group':
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383393 | orchestrator |
2026-02-05 01:03:02.383399 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-02-05 01:03:02.383405 | orchestrator | Thursday 05 February 2026 01:00:45 +0000 (0:00:06.710) 0:00:38.117 *****
2026-02-05 01:03:02.383758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.383784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.383818 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:02.383824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.383835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.383841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.383871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383877 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:02.383887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.383893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383923 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:02.383929 | orchestrator | 2026-02-05 01:03:02.383935 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-05 01:03:02.383941 | orchestrator | Thursday 05 February 2026 01:00:45 +0000 (0:00:00.729) 0:00:38.846 ***** 2026-02-05 01:03:02.383947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.383956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.383962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.383993 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:02.383999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.384015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384043 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:02.384049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.384062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384094 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:02.384100 | orchestrator | 2026-02-05 01:03:02.384106 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-05 
01:03:02.384111 | orchestrator | Thursday 05 February 2026 01:00:47 +0000 (0:00:01.303) 0:00:40.150 ***** 2026-02-05 01:03:02.384116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.384125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.384131 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.384139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}}) 2026-02-05 01:03:02.384197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384245 | orchestrator | 2026-02-05 01:03:02.384250 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-05 01:03:02.384255 | orchestrator | Thursday 05 February 2026 01:00:54 +0000 (0:00:07.144) 0:00:47.294 ***** 2026-02-05 01:03:02.384261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.384266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.384275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.384281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384333 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384386 | orchestrator | 2026-02-05 01:03:02.384392 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-05 01:03:02.384398 | orchestrator | Thursday 05 February 2026 01:01:14 +0000 (0:00:20.142) 0:01:07.437 ***** 2026-02-05 01:03:02.384404 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 01:03:02.384414 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 01:03:02.384420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-05 01:03:02.384426 | orchestrator | 2026-02-05 01:03:02.384432 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-05 01:03:02.384438 | orchestrator | Thursday 05 February 2026 01:01:20 +0000 (0:00:06.425) 0:01:13.862 ***** 2026-02-05 01:03:02.384444 | orchestrator | changed: [testbed-node-0] 
=> (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 01:03:02.384451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 01:03:02.384457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-05 01:03:02.384463 | orchestrator | 2026-02-05 01:03:02.384469 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-05 01:03:02.384475 | orchestrator | Thursday 05 February 2026 01:01:24 +0000 (0:00:03.317) 0:01:17.182 ***** 2026-02-05 01:03:02.384482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 
01:03:02.384522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384654 | orchestrator | 2026-02-05 01:03:02.384660 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-05 01:03:02.384665 | orchestrator | Thursday 05 February 2026 01:01:27 +0000 (0:00:03.341) 0:01:20.524 ***** 2026-02-05 01:03:02.384671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384699 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.384788 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.384843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.384849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.384855 | orchestrator |
2026-02-05 01:03:02.384861 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 01:03:02.384867 | orchestrator | Thursday 05 February 2026 01:01:30 +0000 (0:00:03.218) 0:01:23.742 *****
2026-02-05 01:03:02.384873 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:02.384878 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:02.384884 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:02.384889 | orchestrator |
2026-02-05 01:03:02.384898 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-05 01:03:02.384904 | orchestrator | Thursday 05 February 2026 01:01:31 +0000 (0:00:00.423) 0:01:24.166 *****
2026-02-05 01:03:02.384911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-05 01:03:02.384920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-05 01:03:02.384939 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.384967 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:02.384976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.384986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.384992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.385001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.385008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.385014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.385020 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:02.385028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-05 01:03:02.385040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-05 01:03:02.385046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.385054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-05 01:03:02.385061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.385067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.385072 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:02.385078 | orchestrator |
2026-02-05 01:03:02.385083 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-05 01:03:02.385089 | orchestrator | Thursday 05 February 2026 01:01:32 +0000 (0:00:01.541) 0:01:25.707 *****
2026-02-05 01:03:02.385097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.385147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.385157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-05 01:03:02.385164 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385186 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-05 01:03:02.385248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.385257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.385263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-05 01:03:02.385269 | orchestrator |
2026-02-05 01:03:02.385275 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-05 01:03:02.385281 | orchestrator | Thursday 05 February 2026 01:01:36 +0000 (0:00:04.266) 0:01:29.974 *****
2026-02-05 01:03:02.385286 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:03:02.385292 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:03:02.385298 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:03:02.385308 | orchestrator |
2026-02-05 01:03:02.385313 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-05 01:03:02.385319 | orchestrator | Thursday 05 February 2026 01:01:37 +0000 (0:00:00.544) 0:01:30.519 *****
2026-02-05 01:03:02.385325 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-05 01:03:02.385331 | orchestrator |
2026-02-05 01:03:02.385336 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-05 01:03:02.385342 | orchestrator | Thursday 05 February 2026 01:01:40 +0000 (0:00:02.531) 0:01:33.051 *****
2026-02-05 01:03:02.385348 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-05 01:03:02.385354 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-05 01:03:02.385359 | orchestrator |
2026-02-05 01:03:02.385365 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-05 01:03:02.385373 | orchestrator | Thursday 05 February 2026 01:01:42 +0000 (0:00:02.211) 0:01:35.262 *****
2026-02-05 01:03:02.385378 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:02.385384 | orchestrator |
2026-02-05 01:03:02.385389 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-05 01:03:02.385394 | orchestrator | Thursday 05 February 2026 01:01:56 +0000 (0:00:14.111) 0:01:49.374 *****
2026-02-05 01:03:02.385400 | orchestrator |
2026-02-05 01:03:02.385404 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-05 01:03:02.385409 | orchestrator | Thursday 05 February 2026 01:01:56 +0000 (0:00:00.222) 0:01:49.596 *****
2026-02-05 01:03:02.385415 | orchestrator |
2026-02-05 01:03:02.385420 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-05 01:03:02.385426 | orchestrator | Thursday 05 February 2026 01:01:56 +0000 (0:00:00.060) 0:01:49.657 *****
2026-02-05 01:03:02.385431 | orchestrator |
2026-02-05 01:03:02.385437 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-05 01:03:02.385442 | orchestrator | Thursday 05 February 2026 01:01:56 +0000 (0:00:00.060) 0:01:49.718 *****
2026-02-05 01:03:02.385447 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:03:02.385452 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:02.385457 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:03:02.385462 | orchestrator |
2026-02-05 01:03:02.385468 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-05 01:03:02.385473 | orchestrator | Thursday 05 February 2026 01:02:10 +0000 (0:00:14.236) 0:02:03.954 *****
2026-02-05 01:03:02.385478 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:02.385484 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:03:02.385489 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:03:02.385494 | orchestrator |
2026-02-05 01:03:02.385499 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-05 01:03:02.385504 | orchestrator | Thursday 05 February 2026 01:02:17 +0000 (0:00:06.914) 0:02:10.868 *****
2026-02-05 01:03:02.385509 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:03:02.385514 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:03:02.385520 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:03:02.385525 | orchestrator |
2026-02-05 01:03:02.385574 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-05 01:03:02.385581 | orchestrator | Thursday 05 February 2026 01:02:27 +0000 (0:00:09.656) 0:02:20.525 *****
2026-02-05 01:03:02.385587 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:03:02.385592 | orchestrator | changed: [testbed-node-1]
2026-02-05
01:03:02.385598 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:02.385603 | orchestrator | 2026-02-05 01:03:02.385609 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-05 01:03:02.385614 | orchestrator | Thursday 05 February 2026 01:02:36 +0000 (0:00:08.777) 0:02:29.302 ***** 2026-02-05 01:03:02.385619 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:02.385625 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:02.385630 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:02.385641 | orchestrator | 2026-02-05 01:03:02.385646 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-05 01:03:02.385651 | orchestrator | Thursday 05 February 2026 01:02:46 +0000 (0:00:09.965) 0:02:39.268 ***** 2026-02-05 01:03:02.385656 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:02.385662 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:02.385667 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:02.385672 | orchestrator | 2026-02-05 01:03:02.385682 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-05 01:03:02.385688 | orchestrator | Thursday 05 February 2026 01:02:51 +0000 (0:00:05.713) 0:02:44.981 ***** 2026-02-05 01:03:02.385693 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:02.385699 | orchestrator | 2026-02-05 01:03:02.385704 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:03:02.385710 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:03:02.385716 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:03:02.385721 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 
01:03:02.385726 | orchestrator | 2026-02-05 01:03:02.385731 | orchestrator | 2026-02-05 01:03:02.385736 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:03:02.385741 | orchestrator | Thursday 05 February 2026 01:02:58 +0000 (0:00:06.977) 0:02:51.959 ***** 2026-02-05 01:03:02.385747 | orchestrator | =============================================================================== 2026-02-05 01:03:02.385752 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.14s 2026-02-05 01:03:02.385757 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.24s 2026-02-05 01:03:02.385762 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.11s 2026-02-05 01:03:02.385767 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.97s 2026-02-05 01:03:02.385772 | orchestrator | designate : Restart designate-central container ------------------------- 9.66s 2026-02-05 01:03:02.385777 | orchestrator | designate : Restart designate-producer container ------------------------ 8.78s 2026-02-05 01:03:02.385782 | orchestrator | designate : Copying over config.json files for services ----------------- 7.14s 2026-02-05 01:03:02.385787 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.10s 2026-02-05 01:03:02.385792 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.98s 2026-02-05 01:03:02.385797 | orchestrator | designate : Restart designate-api container ----------------------------- 6.91s 2026-02-05 01:03:02.385807 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.71s 2026-02-05 01:03:02.385812 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.43s 2026-02-05 01:03:02.385817 | orchestrator | designate : Restart 
designate-worker container -------------------------- 5.71s 2026-02-05 01:03:02.385823 | orchestrator | designate : Check designate containers ---------------------------------- 4.27s 2026-02-05 01:03:02.385828 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.92s 2026-02-05 01:03:02.385833 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.80s 2026-02-05 01:03:02.385838 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.73s 2026-02-05 01:03:02.385844 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.70s 2026-02-05 01:03:02.385849 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.55s 2026-02-05 01:03:02.385854 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.34s 2026-02-05 01:03:02.385860 | orchestrator | 2026-02-05 01:03:02 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:03:02.385870 | orchestrator | 2026-02-05 01:03:02 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:03:02.385875 | orchestrator | 2026-02-05 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:05.409630 | orchestrator | 2026-02-05 01:03:05 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:03:05.411700 | orchestrator | 2026-02-05 01:03:05 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:03:05.411756 | orchestrator | 2026-02-05 01:03:05 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:03:05.411767 | orchestrator | 2026-02-05 01:03:05 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:03:05.411775 | orchestrator | 2026-02-05 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:08.451849 | 
orchestrator | 2026-02-05 01:03:08 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:03:08.453383 | orchestrator | 2026-02-05 01:03:08 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:03:08.455103 | orchestrator | 2026-02-05 01:03:08 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:03:08.456575 | orchestrator | 2026-02-05 01:03:08 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:03:08.456620 | orchestrator | 2026-02-05 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:11.500956 | orchestrator | 2026-02-05 01:03:11 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:03:11.506752 | orchestrator | 2026-02-05 01:03:11 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:03:11.508354 | orchestrator | 2026-02-05 01:03:11 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:03:11.510085 | orchestrator | 2026-02-05 01:03:11 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:03:11.510128 | orchestrator | 2026-02-05 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:14.550420 | orchestrator | 2026-02-05 01:03:14 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:03:14.552046 | orchestrator | 2026-02-05 01:03:14 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state STARTED 2026-02-05 01:03:14.553919 | orchestrator | 2026-02-05 01:03:14 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED 2026-02-05 01:03:14.556102 | orchestrator | 2026-02-05 01:03:14 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:03:14.556153 | orchestrator | 2026-02-05 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:03:17.607737 | orchestrator | 2026-02-05 
01:03:17 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:03:17.609340 | orchestrator | 2026-02-05 01:03:17 | INFO  | Task 94427496-e207-4b5d-b6c3-811ad4b7d277 is in state SUCCESS 2026-02-05 01:03:17.609749 | orchestrator | 2026-02-05 01:03:17.611181 | orchestrator | 2026-02-05 01:03:17.611227 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:03:17.611239 | orchestrator | 2026-02-05 01:03:17.611248 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:03:17.611257 | orchestrator | Thursday 05 February 2026 01:02:15 +0000 (0:00:00.190) 0:00:00.190 ***** 2026-02-05 01:03:17.611266 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:03:17.611275 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:03:17.611301 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:03:17.611311 | orchestrator | 2026-02-05 01:03:17.611320 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:03:17.611338 | orchestrator | Thursday 05 February 2026 01:02:15 +0000 (0:00:00.265) 0:00:00.455 ***** 2026-02-05 01:03:17.611347 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-05 01:03:17.611356 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-05 01:03:17.611365 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-05 01:03:17.611373 | orchestrator | 2026-02-05 01:03:17.611382 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-05 01:03:17.611391 | orchestrator | 2026-02-05 01:03:17.611400 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 01:03:17.611408 | orchestrator | Thursday 05 February 2026 01:02:15 +0000 (0:00:00.356) 0:00:00.812 ***** 2026-02-05 01:03:17.611417 | orchestrator | included: 
/ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:03:17.611426 | orchestrator | 2026-02-05 01:03:17.611434 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-05 01:03:17.611443 | orchestrator | Thursday 05 February 2026 01:02:16 +0000 (0:00:00.481) 0:00:01.293 ***** 2026-02-05 01:03:17.611451 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-05 01:03:17.611460 | orchestrator | 2026-02-05 01:03:17.611469 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-05 01:03:17.611477 | orchestrator | Thursday 05 February 2026 01:02:20 +0000 (0:00:03.960) 0:00:05.253 ***** 2026-02-05 01:03:17.611486 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-05 01:03:17.611495 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-05 01:03:17.611503 | orchestrator | 2026-02-05 01:03:17.611512 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-05 01:03:17.611539 | orchestrator | Thursday 05 February 2026 01:02:27 +0000 (0:00:06.954) 0:00:12.208 ***** 2026-02-05 01:03:17.611549 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:03:17.611558 | orchestrator | 2026-02-05 01:03:17.611567 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-05 01:03:17.611576 | orchestrator | Thursday 05 February 2026 01:02:30 +0000 (0:00:03.164) 0:00:15.373 ***** 2026-02-05 01:03:17.611585 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-02-05 01:03:17.611594 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:03:17.611603 | orchestrator | 2026-02-05 01:03:17.611611 | orchestrator | TASK 
[service-ks-register : placement | Creating roles] ************************ 2026-02-05 01:03:17.611619 | orchestrator | Thursday 05 February 2026 01:02:34 +0000 (0:00:03.910) 0:00:19.283 ***** 2026-02-05 01:03:17.611628 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:03:17.611637 | orchestrator | 2026-02-05 01:03:17.611647 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-05 01:03:17.611655 | orchestrator | Thursday 05 February 2026 01:02:37 +0000 (0:00:03.721) 0:00:23.005 ***** 2026-02-05 01:03:17.611664 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-05 01:03:17.611673 | orchestrator | 2026-02-05 01:03:17.611682 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 01:03:17.611691 | orchestrator | Thursday 05 February 2026 01:02:41 +0000 (0:00:03.805) 0:00:26.810 ***** 2026-02-05 01:03:17.611699 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:17.611708 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:17.611716 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:17.611724 | orchestrator | 2026-02-05 01:03:17.611733 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-05 01:03:17.611741 | orchestrator | Thursday 05 February 2026 01:02:41 +0000 (0:00:00.274) 0:00:27.085 ***** 2026-02-05 01:03:17.611760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.611788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.611799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.611809 | orchestrator | 2026-02-05 01:03:17.611818 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-05 01:03:17.611828 | orchestrator | Thursday 05 February 2026 01:02:42 +0000 (0:00:00.710) 0:00:27.796 ***** 2026-02-05 01:03:17.611837 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:17.611846 | orchestrator | 2026-02-05 01:03:17.611855 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-05 01:03:17.611864 | orchestrator | Thursday 05 February 2026 01:02:42 +0000 (0:00:00.120) 0:00:27.916 ***** 2026-02-05 01:03:17.611873 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:17.611881 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:17.611890 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:17.611899 | orchestrator | 2026-02-05 01:03:17.611908 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-05 01:03:17.611917 | orchestrator | Thursday 05 February 2026 01:02:43 +0000 (0:00:00.377) 0:00:28.294 ***** 2026-02-05 01:03:17.611925 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:03:17.611934 | orchestrator | 2026-02-05 01:03:17.611943 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-05 01:03:17.611957 | orchestrator | Thursday 05 February 2026 01:02:43 +0000 (0:00:00.471) 0:00:28.765 ***** 2026-02-05 01:03:17.611966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.611983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.611996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612002 | orchestrator | 2026-02-05 01:03:17.612007 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-05 01:03:17.612012 | orchestrator | Thursday 05 February 2026 01:02:44 +0000 (0:00:01.151) 0:00:29.916 ***** 2026-02-05 01:03:17.612017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612026 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:17.612032 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612037 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:17.612045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612051 | orchestrator | skipping: [testbed-node-2] 2026-02-05 
01:03:17.612056 | orchestrator | 2026-02-05 01:03:17.612061 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-05 01:03:17.612066 | orchestrator | Thursday 05 February 2026 01:02:45 +0000 (0:00:00.698) 0:00:30.615 ***** 2026-02-05 01:03:17.612074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612080 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:03:17.612085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612093 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:17.612099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612105 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:17.612110 | orchestrator | 2026-02-05 01:03:17.612115 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-05 01:03:17.612120 | orchestrator | Thursday 05 February 2026 01:02:46 +0000 (0:00:00.628) 0:00:31.244 ***** 2026-02-05 01:03:17.612128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612151 | orchestrator | 2026-02-05 01:03:17.612156 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-05 01:03:17.612162 | orchestrator | Thursday 05 February 2026 01:02:47 +0000 (0:00:01.098) 0:00:32.342 ***** 2026-02-05 01:03:17.612167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612194 | orchestrator | 2026-02-05 01:03:17.612199 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-05 01:03:17.612204 | orchestrator | Thursday 05 February 2026 01:02:49 +0000 (0:00:02.386) 0:00:34.728 ***** 2026-02-05 01:03:17.612210 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 01:03:17.612215 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 01:03:17.612220 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-05 01:03:17.612225 | orchestrator | 2026-02-05 01:03:17.612231 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-05 01:03:17.612236 | orchestrator | Thursday 05 February 2026 01:02:51 +0000 (0:00:01.409) 0:00:36.137 ***** 2026-02-05 01:03:17.612245 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:17.612250 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:17.612255 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:17.612260 | orchestrator | 2026-02-05 01:03:17.612265 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-05 01:03:17.612271 | orchestrator | Thursday 05 February 2026 01:02:52 +0000 (0:00:01.174) 0:00:37.311 ***** 2026-02-05 01:03:17.612276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612281 | orchestrator | 
skipping: [testbed-node-0] 2026-02-05 01:03:17.612287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612292 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:03:17.612301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-05 01:03:17.612307 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 01:03:17.612312 | orchestrator | 2026-02-05 01:03:17.612317 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-05 01:03:17.612323 | orchestrator | Thursday 05 February 2026 01:02:52 +0000 (0:00:00.415) 0:00:37.727 ***** 2026-02-05 01:03:17.612330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-05 01:03:17.612350 | orchestrator | 2026-02-05 01:03:17.612355 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-05 01:03:17.612360 | orchestrator | Thursday 05 February 2026 01:02:53 +0000 (0:00:01.038) 0:00:38.766 ***** 2026-02-05 01:03:17.612366 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:17.612371 | orchestrator | 2026-02-05 01:03:17.612376 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-05 01:03:17.612381 | orchestrator | Thursday 05 February 2026 01:02:55 +0000 (0:00:01.868) 0:00:40.634 ***** 2026-02-05 01:03:17.612387 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:17.612392 | orchestrator | 2026-02-05 01:03:17.612397 | orchestrator | TASK [placement : Running placement bootstrap container] 
*********************** 2026-02-05 01:03:17.612402 | orchestrator | Thursday 05 February 2026 01:02:57 +0000 (0:00:02.304) 0:00:42.939 ***** 2026-02-05 01:03:17.612408 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:17.612413 | orchestrator | 2026-02-05 01:03:17.612418 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 01:03:17.612423 | orchestrator | Thursday 05 February 2026 01:03:11 +0000 (0:00:13.514) 0:00:56.454 ***** 2026-02-05 01:03:17.612428 | orchestrator | 2026-02-05 01:03:17.612434 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 01:03:17.612439 | orchestrator | Thursday 05 February 2026 01:03:11 +0000 (0:00:00.058) 0:00:56.512 ***** 2026-02-05 01:03:17.612444 | orchestrator | 2026-02-05 01:03:17.612452 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-05 01:03:17.612457 | orchestrator | Thursday 05 February 2026 01:03:11 +0000 (0:00:00.058) 0:00:56.570 ***** 2026-02-05 01:03:17.612466 | orchestrator | 2026-02-05 01:03:17.612471 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-05 01:03:17.612477 | orchestrator | Thursday 05 February 2026 01:03:11 +0000 (0:00:00.061) 0:00:56.632 ***** 2026-02-05 01:03:17.612482 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:03:17.612487 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:03:17.612492 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:03:17.612498 | orchestrator | 2026-02-05 01:03:17.612503 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:03:17.612511 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:03:17.612516 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0
2026-02-05 01:03:17.612539 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-05 01:03:17.612547 | orchestrator |
2026-02-05 01:03:17.612552 | orchestrator |
2026-02-05 01:03:17.612557 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:03:17.612562 | orchestrator | Thursday 05 February 2026 01:03:16 +0000 (0:00:04.920) 0:01:01.552 *****
2026-02-05 01:03:17.612568 | orchestrator | ===============================================================================
2026-02-05 01:03:17.612573 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.51s
2026-02-05 01:03:17.612578 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.95s
2026-02-05 01:03:17.612583 | orchestrator | placement : Restart placement-api container ----------------------------- 4.92s
2026-02-05 01:03:17.612588 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.96s
2026-02-05 01:03:17.612594 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.91s
2026-02-05 01:03:17.612599 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.81s
2026-02-05 01:03:17.612691 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.72s
2026-02-05 01:03:17.612698 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.16s
2026-02-05 01:03:17.612703 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.39s
2026-02-05 01:03:17.612709 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.30s
2026-02-05 01:03:17.612714 | orchestrator | placement : Creating placement databases -------------------------------- 1.87s
2026-02-05 01:03:17.612719 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.41s
2026-02-05 01:03:17.612724 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.17s
2026-02-05 01:03:17.612729 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.15s
2026-02-05 01:03:17.612734 | orchestrator | placement : Copying over config.json files for services ----------------- 1.10s
2026-02-05 01:03:17.612739 | orchestrator | placement : Check placement containers ---------------------------------- 1.04s
2026-02-05 01:03:17.612744 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.71s
2026-02-05 01:03:17.612749 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.70s
2026-02-05 01:03:17.612755 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.63s
2026-02-05 01:03:17.612760 | orchestrator | placement : include_tasks ----------------------------------------------- 0.48s
2026-02-05 01:03:17.612768 | orchestrator | 2026-02-05 01:03:17 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:17.615382 | orchestrator | 2026-02-05 01:03:17 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:17.615658 | orchestrator | 2026-02-05 01:03:17 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:20.669157 | orchestrator | 2026-02-05 01:03:20 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:20.669262 | orchestrator | 2026-02-05 01:03:20 | INFO  | Task 56f14fd0-c023-4254-a85a-0f5df8c8f3e0 is in state STARTED
2026-02-05 01:03:20.670287 | orchestrator | 2026-02-05 01:03:20 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:20.671688 | orchestrator | 2026-02-05 01:03:20 | INFO  | Task 
4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:20.671721 | orchestrator | 2026-02-05 01:03:20 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:23.712155 | orchestrator | 2026-02-05 01:03:23 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:23.712807 | orchestrator | 2026-02-05 01:03:23 | INFO  | Task 56f14fd0-c023-4254-a85a-0f5df8c8f3e0 is in state SUCCESS
2026-02-05 01:03:23.714749 | orchestrator | 2026-02-05 01:03:23 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:23.717150 | orchestrator | 2026-02-05 01:03:23 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:23.717275 | orchestrator | 2026-02-05 01:03:23 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:26.755707 | orchestrator | 2026-02-05 01:03:26 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:26.756588 | orchestrator | 2026-02-05 01:03:26 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:26.759428 | orchestrator | 2026-02-05 01:03:26 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:26.760576 | orchestrator | 2026-02-05 01:03:26 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:26.760611 | orchestrator | 2026-02-05 01:03:26 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:29.784259 | orchestrator | 2026-02-05 01:03:29 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:29.786571 | orchestrator | 2026-02-05 01:03:29 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:29.788863 | orchestrator | 2026-02-05 01:03:29 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:29.790782 | orchestrator | 2026-02-05 01:03:29 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:29.791113 | orchestrator | 2026-02-05 01:03:29 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:32.975875 | orchestrator | 2026-02-05 01:03:32 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:32.976145 | orchestrator | 2026-02-05 01:03:32 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:32.977026 | orchestrator | 2026-02-05 01:03:32 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:32.977547 | orchestrator | 2026-02-05 01:03:32 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:32.977569 | orchestrator | 2026-02-05 01:03:32 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:36.018498 | orchestrator | 2026-02-05 01:03:36 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:36.019169 | orchestrator | 2026-02-05 01:03:36 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:36.019979 | orchestrator | 2026-02-05 01:03:36 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:36.021358 | orchestrator | 2026-02-05 01:03:36 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:36.021407 | orchestrator | 2026-02-05 01:03:36 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:39.054192 | orchestrator | 2026-02-05 01:03:39 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:39.055883 | orchestrator | 2026-02-05 01:03:39 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:39.057861 | orchestrator | 2026-02-05 01:03:39 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:39.059558 | orchestrator | 2026-02-05 01:03:39 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:39.059606 | orchestrator | 2026-02-05 01:03:39 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:42.086187 | orchestrator | 2026-02-05 01:03:42 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:42.087123 | orchestrator | 2026-02-05 01:03:42 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:42.087670 | orchestrator | 2026-02-05 01:03:42 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:42.088325 | orchestrator | 2026-02-05 01:03:42 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:42.088357 | orchestrator | 2026-02-05 01:03:42 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:45.121005 | orchestrator | 2026-02-05 01:03:45 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:45.122899 | orchestrator | 2026-02-05 01:03:45 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:45.124463 | orchestrator | 2026-02-05 01:03:45 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:45.127066 | orchestrator | 2026-02-05 01:03:45 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:45.127132 | orchestrator | 2026-02-05 01:03:45 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:48.152870 | orchestrator | 2026-02-05 01:03:48 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:48.153902 | orchestrator | 2026-02-05 01:03:48 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:48.154925 | orchestrator | 2026-02-05 01:03:48 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:48.155539 | orchestrator | 2026-02-05 01:03:48 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:48.155552 | orchestrator | 2026-02-05 01:03:48 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:51.188634 | orchestrator | 2026-02-05 01:03:51 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:51.189512 | orchestrator | 2026-02-05 01:03:51 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:51.190177 | orchestrator | 2026-02-05 01:03:51 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:51.193363 | orchestrator | 2026-02-05 01:03:51 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:51.194570 | orchestrator | 2026-02-05 01:03:51 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:54.221788 | orchestrator | 2026-02-05 01:03:54 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:54.233618 | orchestrator | 2026-02-05 01:03:54 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:54.233668 | orchestrator | 2026-02-05 01:03:54 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:54.233676 | orchestrator | 2026-02-05 01:03:54 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:54.233682 | orchestrator | 2026-02-05 01:03:54 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:03:57.249753 | orchestrator | 2026-02-05 01:03:57 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:03:57.250355 | orchestrator | 2026-02-05 01:03:57 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:03:57.250939 | orchestrator | 2026-02-05 01:03:57 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:03:57.251846 | orchestrator | 2026-02-05 01:03:57 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:03:57.251871 | orchestrator | 2026-02-05 01:03:57 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:04:00.281944 | orchestrator | 2026-02-05 01:04:00 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:04:00.282548 | orchestrator | 2026-02-05 01:04:00 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:04:00.283781 | orchestrator | 2026-02-05 01:04:00 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:04:00.284380 | orchestrator | 2026-02-05 01:04:00 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:04:00.284536 | orchestrator | 2026-02-05 01:04:00 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:04:03.304936 | orchestrator | 2026-02-05 01:04:03 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:04:03.305285 | orchestrator | 2026-02-05 01:04:03 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:04:03.315937 | orchestrator | 2026-02-05 01:04:03 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state STARTED
2026-02-05 01:04:03.316445 | orchestrator | 2026-02-05 01:04:03 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED
2026-02-05 01:04:03.316876 | orchestrator | 2026-02-05 01:04:03 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:04:06.349898 | orchestrator | 2026-02-05 01:04:06 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED
2026-02-05 01:04:06.351769 | orchestrator | 2026-02-05 01:04:06 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED
2026-02-05 01:04:06.355157 | orchestrator | 2026-02-05 01:04:06 | INFO  | Task 569c7396-5918-445b-b988-d6bf1557a949 is in state SUCCESS
2026-02-05 01:04:06.356953 | orchestrator |
2026-02-05 01:04:06.356994 | orchestrator |
2026-02-05 
01:04:06.357006 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:04:06.357018 | orchestrator |
2026-02-05 01:04:06.357028 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:04:06.357039 | orchestrator | Thursday 05 February 2026 01:03:21 +0000 (0:00:00.175) 0:00:00.175 *****
2026-02-05 01:04:06.357049 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:04:06.357061 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:04:06.357071 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:04:06.357082 | orchestrator |
2026-02-05 01:04:06.357092 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:04:06.357581 | orchestrator | Thursday 05 February 2026 01:03:21 +0000 (0:00:00.319) 0:00:00.494 *****
2026-02-05 01:04:06.357601 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-05 01:04:06.357641 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-05 01:04:06.357652 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-05 01:04:06.357662 | orchestrator |
2026-02-05 01:04:06.357672 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-02-05 01:04:06.357681 | orchestrator |
2026-02-05 01:04:06.357691 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-02-05 01:04:06.357701 | orchestrator | Thursday 05 February 2026 01:03:22 +0000 (0:00:00.792) 0:00:01.287 *****
2026-02-05 01:04:06.357712 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:04:06.357722 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:04:06.357732 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:04:06.357742 | orchestrator |
2026-02-05 01:04:06.357752 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:04:06.357763 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:04:06.357774 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:04:06.357784 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:04:06.357794 | orchestrator |
2026-02-05 01:04:06.357804 | orchestrator |
2026-02-05 01:04:06.357814 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:04:06.357825 | orchestrator | Thursday 05 February 2026 01:03:22 +0000 (0:00:00.751) 0:00:02.038 *****
2026-02-05 01:04:06.357835 | orchestrator | ===============================================================================
2026-02-05 01:04:06.357844 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2026-02-05 01:04:06.357854 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s
2026-02-05 01:04:06.357863 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-05 01:04:06.357872 | orchestrator |
2026-02-05 01:04:06.357881 | orchestrator |
2026-02-05 01:04:06.357890 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:04:06.357899 | orchestrator |
2026-02-05 01:04:06.357908 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:04:06.357918 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.198) 0:00:00.198 *****
2026-02-05 01:04:06.357928 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:04:06.357938 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:04:06.357947 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:04:06.357956 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:04:06.357965 | 
orchestrator | ok: [testbed-node-4]
2026-02-05 01:04:06.357974 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:04:06.357983 | orchestrator |
2026-02-05 01:04:06.357992 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:04:06.358001 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.623) 0:00:00.821 *****
2026-02-05 01:04:06.358011 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-02-05 01:04:06.358059 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-02-05 01:04:06.358069 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-02-05 01:04:06.358079 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-02-05 01:04:06.358088 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-02-05 01:04:06.358098 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-02-05 01:04:06.358108 | orchestrator |
2026-02-05 01:04:06.358117 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-02-05 01:04:06.358127 | orchestrator |
2026-02-05 01:04:06.358136 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-05 01:04:06.358144 | orchestrator | Thursday 05 February 2026 01:00:08 +0000 (0:00:00.608) 0:00:01.430 *****
2026-02-05 01:04:06.358164 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:04:06.358175 | orchestrator |
2026-02-05 01:04:06.358184 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-02-05 01:04:06.358193 | orchestrator | Thursday 05 February 2026 01:00:09 +0000 (0:00:01.018) 0:00:02.449 *****
2026-02-05 01:04:06.358203 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:04:06.358213 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:04:06.358222 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:04:06.358232 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:04:06.358241 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:04:06.358251 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:04:06.358261 | orchestrator |
2026-02-05 01:04:06.358271 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-02-05 01:04:06.358282 | orchestrator | Thursday 05 February 2026 01:00:10 +0000 (0:00:01.071) 0:00:03.521 *****
2026-02-05 01:04:06.358292 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:04:06.358302 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:04:06.358312 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:04:06.358332 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:04:06.358351 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:04:06.358412 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:04:06.358424 | orchestrator |
2026-02-05 01:04:06.358435 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-02-05 01:04:06.358445 | orchestrator | Thursday 05 February 2026 01:00:11 +0000 (0:00:00.896) 0:00:04.418 *****
2026-02-05 01:04:06.358454 | orchestrator | ok: [testbed-node-0] => {
2026-02-05 01:04:06.358465 | orchestrator |  "changed": false,
2026-02-05 01:04:06.358476 | orchestrator |  "msg": "All assertions passed"
2026-02-05 01:04:06.358486 | orchestrator | }
2026-02-05 01:04:06.358519 | orchestrator | ok: [testbed-node-1] => {
2026-02-05 01:04:06.358537 | orchestrator |  "changed": false,
2026-02-05 01:04:06.358547 | orchestrator |  "msg": "All assertions passed"
2026-02-05 01:04:06.358558 | orchestrator | }
2026-02-05 01:04:06.358567 | orchestrator | ok: [testbed-node-2] => {
2026-02-05 01:04:06.358577 | orchestrator |  "changed": false,
2026-02-05 01:04:06.358586 | orchestrator |  "msg": "All assertions passed"
2026-02-05 01:04:06.358595 | orchestrator | }
2026-02-05 01:04:06.358605 | orchestrator | ok: [testbed-node-3] => {
2026-02-05 01:04:06.358622 | orchestrator |  "changed": false,
2026-02-05 01:04:06.358633 | orchestrator |  "msg": "All assertions passed"
2026-02-05 01:04:06.358643 | orchestrator | }
2026-02-05 01:04:06.358652 | orchestrator | ok: [testbed-node-4] => {
2026-02-05 01:04:06.358661 | orchestrator |  "changed": false,
2026-02-05 01:04:06.358671 | orchestrator |  "msg": "All assertions passed"
2026-02-05 01:04:06.358680 | orchestrator | }
2026-02-05 01:04:06.358689 | orchestrator | ok: [testbed-node-5] => {
2026-02-05 01:04:06.358699 | orchestrator |  "changed": false,
2026-02-05 01:04:06.358708 | orchestrator |  "msg": "All assertions passed"
2026-02-05 01:04:06.358718 | orchestrator | }
2026-02-05 01:04:06.358728 | orchestrator |
2026-02-05 01:04:06.358738 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-02-05 01:04:06.358748 | orchestrator | Thursday 05 February 2026 01:00:12 +0000 (0:00:00.580) 0:00:04.998 *****
2026-02-05 01:04:06.358758 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:04:06.358767 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:04:06.358777 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:04:06.358787 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:04:06.358796 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:04:06.358806 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:04:06.358815 | orchestrator |
2026-02-05 01:04:06.358912 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-02-05 01:04:06.358930 | orchestrator | Thursday 05 February 2026 01:00:12 +0000 (0:00:00.535) 0:00:05.534 *****
2026-02-05 01:04:06.358940 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-02-05 01:04:06.358960 | orchestrator |
2026-02-05 01:04:06.358970 | orchestrator | TASK [service-ks-register : 
neutron | Creating endpoints] ********************** 2026-02-05 01:04:06.358979 | orchestrator | Thursday 05 February 2026 01:00:16 +0000 (0:00:03.392) 0:00:08.926 ***** 2026-02-05 01:04:06.358988 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-05 01:04:06.358999 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-05 01:04:06.359009 | orchestrator | 2026-02-05 01:04:06.359019 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-05 01:04:06.359029 | orchestrator | Thursday 05 February 2026 01:00:23 +0000 (0:00:07.152) 0:00:16.079 ***** 2026-02-05 01:04:06.359039 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:04:06.359048 | orchestrator | 2026-02-05 01:04:06.359057 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-05 01:04:06.359067 | orchestrator | Thursday 05 February 2026 01:00:26 +0000 (0:00:03.140) 0:00:19.219 ***** 2026-02-05 01:04:06.359075 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-05 01:04:06.359084 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:04:06.359093 | orchestrator | 2026-02-05 01:04:06.359103 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-05 01:04:06.359165 | orchestrator | Thursday 05 February 2026 01:00:30 +0000 (0:00:04.119) 0:00:23.338 ***** 2026-02-05 01:04:06.359180 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:04:06.359190 | orchestrator | 2026-02-05 01:04:06.359200 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-05 01:04:06.359210 | orchestrator | Thursday 05 February 2026 01:00:33 +0000 (0:00:03.526) 0:00:26.865 ***** 2026-02-05 01:04:06.359220 | orchestrator | 
changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-05 01:04:06.359228 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-05 01:04:06.359234 | orchestrator | 2026-02-05 01:04:06.359240 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:04:06.359246 | orchestrator | Thursday 05 February 2026 01:00:42 +0000 (0:00:08.023) 0:00:34.888 ***** 2026-02-05 01:04:06.359252 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.359258 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.359264 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.359269 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.359275 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.359281 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.359287 | orchestrator | 2026-02-05 01:04:06.359293 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-05 01:04:06.359299 | orchestrator | Thursday 05 February 2026 01:00:42 +0000 (0:00:00.699) 0:00:35.587 ***** 2026-02-05 01:04:06.359304 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.359311 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.359316 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.359322 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.359328 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.359334 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.359339 | orchestrator | 2026-02-05 01:04:06.359345 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-05 01:04:06.359351 | orchestrator | Thursday 05 February 2026 01:00:44 +0000 (0:00:02.177) 0:00:37.765 ***** 2026-02-05 01:04:06.359357 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:04:06.359363 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 01:04:06.359369 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:04:06.359374 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:04:06.359380 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:04:06.359411 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:04:06.359418 | orchestrator | 2026-02-05 01:04:06.359424 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-05 01:04:06.359437 | orchestrator | Thursday 05 February 2026 01:00:46 +0000 (0:00:01.840) 0:00:39.605 ***** 2026-02-05 01:04:06.359443 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.359449 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.359455 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.359460 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.359466 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.359472 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.359478 | orchestrator | 2026-02-05 01:04:06.359484 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-05 01:04:06.359506 | orchestrator | Thursday 05 February 2026 01:00:49 +0000 (0:00:03.067) 0:00:42.673 ***** 2026-02-05 01:04:06.359521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.359531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.359538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.359545 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.359584 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.359592 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.359598 | orchestrator | 2026-02-05 01:04:06.359604 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-05 01:04:06.359610 | orchestrator | Thursday 05 February 2026 01:00:52 +0000 (0:00:02.965) 0:00:45.639 ***** 2026-02-05 01:04:06.359616 | orchestrator | [WARNING]: Skipped 2026-02-05 01:04:06.359623 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-05 01:04:06.359629 | orchestrator | due to this access issue: 2026-02-05 01:04:06.359635 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-05 01:04:06.359641 | orchestrator | a directory 2026-02-05 01:04:06.359647 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:04:06.359653 | orchestrator | 2026-02-05 01:04:06.359659 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:04:06.359665 | orchestrator | Thursday 05 February 2026 01:00:53 +0000 (0:00:00.869) 0:00:46.508 ***** 2026-02-05 01:04:06.359671 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:04:06.359678 | orchestrator | 2026-02-05 01:04:06.359684 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-05 01:04:06.359689 | orchestrator | Thursday 05 February 2026 01:00:54 +0000 
(0:00:01.065) 0:00:47.574 ***** 2026-02-05 01:04:06.359695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.359721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.359732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.359738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.359745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.359751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.359760 | orchestrator | 2026-02-05 01:04:06.359767 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-05 01:04:06.359773 | orchestrator | Thursday 05 February 2026 01:00:58 +0000 (0:00:04.144) 0:00:51.718 ***** 2026-02-05 01:04:06.359795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.359805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.359812 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.359819 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.359827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.359834 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.359842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.359853 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.359860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.359867 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.359893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.359901 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.359908 | orchestrator | 2026-02-05 01:04:06.359916 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-05 01:04:06.359922 | orchestrator | Thursday 05 February 2026 01:01:01 +0000 (0:00:02.883) 0:00:54.602 ***** 2026-02-05 01:04:06.359932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.359939 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.359947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.359955 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.359962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.359973 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.359980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.359987 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360010 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 01:04:06.360017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360024 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360032 | orchestrator | 2026-02-05 01:04:06.360038 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-05 01:04:06.360045 | orchestrator | Thursday 05 February 2026 01:01:04 +0000 (0:00:02.757) 0:00:57.359 ***** 2026-02-05 01:04:06.360052 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.360060 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.360066 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360073 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.360080 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360087 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360098 | orchestrator | 2026-02-05 01:04:06.360105 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-05 01:04:06.360112 | orchestrator | Thursday 05 February 2026 01:01:06 +0000 (0:00:02.390) 0:00:59.750 ***** 2026-02-05 01:04:06.360120 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.360126 | orchestrator | 2026-02-05 
01:04:06.360133 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-05 01:04:06.360140 | orchestrator | Thursday 05 February 2026 01:01:06 +0000 (0:00:00.092) 0:00:59.842 ***** 2026-02-05 01:04:06.360147 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.360154 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.360161 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.360168 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360175 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360181 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360187 | orchestrator | 2026-02-05 01:04:06.360193 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-05 01:04:06.360198 | orchestrator | Thursday 05 February 2026 01:01:07 +0000 (0:00:00.682) 0:01:00.524 ***** 2026-02-05 01:04:06.360205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360211 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.360222 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360229 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360243 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.360249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360259 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.360265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360271 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360283 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360289 | orchestrator | 2026-02-05 01:04:06.360295 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-05 01:04:06.360301 | orchestrator | Thursday 05 February 2026 01:01:10 +0000 (0:00:02.911) 0:01:03.436 ***** 2026-02-05 01:04:06.360311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.360355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.360362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.360373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360379 | orchestrator | 
2026-02-05 01:04:06.360389 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-05 01:04:06.360395 | orchestrator | Thursday 05 February 2026 01:01:14 +0000 (0:00:03.769) 0:01:07.205 ***** 2026-02-05 01:04:06.360405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360427 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.360436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.360446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.360452 | orchestrator | 2026-02-05 01:04:06.360458 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-05 01:04:06.360467 | orchestrator | Thursday 05 February 2026 01:01:20 +0000 (0:00:06.573) 0:01:13.779 ***** 2026-02-05 01:04:06.360478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360556 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.360572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360582 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.360597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.360615 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.360630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360640 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360659 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360678 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360687 | orchestrator | 2026-02-05 01:04:06.360696 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-05 01:04:06.360706 | orchestrator | Thursday 05 February 2026 01:01:23 +0000 (0:00:02.709) 0:01:16.488 ***** 2026-02-05 01:04:06.360715 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360725 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360734 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360742 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:06.360752 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:04:06.360761 | orchestrator | changed: [testbed-node-2] 2026-02-05 
01:04:06.360770 | orchestrator | 2026-02-05 01:04:06.360779 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-05 01:04:06.360788 | orchestrator | Thursday 05 February 2026 01:01:26 +0000 (0:00:02.825) 0:01:19.314 ***** 2026-02-05 01:04:06.360796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360810 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360838 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360847 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.360857 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.360905 | orchestrator | 2026-02-05 01:04:06.360914 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-05 01:04:06.360923 | orchestrator | Thursday 05 February 2026 01:01:30 +0000 (0:00:03.877) 0:01:23.192 ***** 2026-02-05 01:04:06.360932 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.360942 | 
orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.360951 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.360960 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.360970 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.360979 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.360989 | orchestrator | 2026-02-05 01:04:06.360994 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-05 01:04:06.361003 | orchestrator | Thursday 05 February 2026 01:01:32 +0000 (0:00:02.383) 0:01:25.576 ***** 2026-02-05 01:04:06.361008 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361014 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361019 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361024 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361030 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361035 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361041 | orchestrator | 2026-02-05 01:04:06.361046 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-05 01:04:06.361051 | orchestrator | Thursday 05 February 2026 01:01:34 +0000 (0:00:02.031) 0:01:27.607 ***** 2026-02-05 01:04:06.361057 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361062 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361068 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361073 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361079 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361084 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361089 | orchestrator | 2026-02-05 01:04:06.361095 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-05 01:04:06.361100 | orchestrator | Thursday 05 February 2026 01:01:36 
+0000 (0:00:01.855) 0:01:29.462 ***** 2026-02-05 01:04:06.361106 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361111 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361117 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361122 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361127 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361133 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361138 | orchestrator | 2026-02-05 01:04:06.361144 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-05 01:04:06.361149 | orchestrator | Thursday 05 February 2026 01:01:38 +0000 (0:00:01.875) 0:01:31.338 ***** 2026-02-05 01:04:06.361155 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361160 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361166 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361171 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361176 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361182 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361187 | orchestrator | 2026-02-05 01:04:06.361193 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-05 01:04:06.361198 | orchestrator | Thursday 05 February 2026 01:01:40 +0000 (0:00:02.082) 0:01:33.421 ***** 2026-02-05 01:04:06.361208 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361213 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361219 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361224 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361230 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361235 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361240 | orchestrator | 2026-02-05 01:04:06.361246 | orchestrator | TASK [neutron : Copying 
over dnsmasq.conf] ************************************* 2026-02-05 01:04:06.361251 | orchestrator | Thursday 05 February 2026 01:01:42 +0000 (0:00:02.042) 0:01:35.463 ***** 2026-02-05 01:04:06.361257 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:04:06.361262 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361268 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:04:06.361274 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361279 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:04:06.361285 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361290 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:04:06.361296 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361301 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:04:06.361307 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361312 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-05 01:04:06.361318 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361323 | orchestrator | 2026-02-05 01:04:06.361329 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-05 01:04:06.361334 | orchestrator | Thursday 05 February 2026 01:01:44 +0000 (0:00:01.783) 0:01:37.247 ***** 2026-02-05 01:04:06.361345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.361352 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.361379 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.361405 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.361423 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.361435 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.361451 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361457 | orchestrator | 2026-02-05 01:04:06.361465 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-05 01:04:06.361471 | orchestrator | Thursday 05 February 2026 01:01:45 +0000 (0:00:01.518) 0:01:38.766 ***** 2026-02-05 01:04:06.361539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.361553 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.361565 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.361580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.361587 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361592 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.361613 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.361625 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361630 | orchestrator | 2026-02-05 01:04:06.361636 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-05 01:04:06.361641 | orchestrator | Thursday 05 February 2026 01:01:47 +0000 (0:00:01.544) 0:01:40.310 ***** 2026-02-05 01:04:06.361647 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361652 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361658 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361663 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361669 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361674 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361679 | orchestrator | 2026-02-05 01:04:06.361685 | orchestrator | TASK [neutron : Copying over 
neutron_ovn_metadata_agent.ini] ******************* 2026-02-05 01:04:06.361690 | orchestrator | Thursday 05 February 2026 01:01:49 +0000 (0:00:01.977) 0:01:42.288 ***** 2026-02-05 01:04:06.361696 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361701 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361707 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361712 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:04:06.361718 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:04:06.361723 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:04:06.361729 | orchestrator | 2026-02-05 01:04:06.361734 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-05 01:04:06.361740 | orchestrator | Thursday 05 February 2026 01:01:52 +0000 (0:00:03.413) 0:01:45.702 ***** 2026-02-05 01:04:06.361745 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361751 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361756 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361761 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361767 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361772 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361778 | orchestrator | 2026-02-05 01:04:06.361783 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-05 01:04:06.361789 | orchestrator | Thursday 05 February 2026 01:01:54 +0000 (0:00:01.589) 0:01:47.292 ***** 2026-02-05 01:04:06.361794 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361800 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361805 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361811 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361816 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361821 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 01:04:06.361827 | orchestrator | 2026-02-05 01:04:06.361832 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-05 01:04:06.361838 | orchestrator | Thursday 05 February 2026 01:01:56 +0000 (0:00:01.977) 0:01:49.269 ***** 2026-02-05 01:04:06.361843 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361849 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361854 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361863 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361868 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361874 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361879 | orchestrator | 2026-02-05 01:04:06.361885 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-05 01:04:06.361890 | orchestrator | Thursday 05 February 2026 01:01:59 +0000 (0:00:03.162) 0:01:52.431 ***** 2026-02-05 01:04:06.361896 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361901 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361907 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361912 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361917 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.361926 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361931 | orchestrator | 2026-02-05 01:04:06.361937 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-05 01:04:06.361942 | orchestrator | Thursday 05 February 2026 01:02:02 +0000 (0:00:02.826) 0:01:55.258 ***** 2026-02-05 01:04:06.361948 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.361953 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.361959 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.361964 | orchestrator | skipping: 
[testbed-node-4] 2026-02-05 01:04:06.361969 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.361975 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.361981 | orchestrator | 2026-02-05 01:04:06.361986 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-02-05 01:04:06.361992 | orchestrator | Thursday 05 February 2026 01:02:04 +0000 (0:00:01.667) 0:01:56.926 ***** 2026-02-05 01:04:06.361997 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.362005 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.362011 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.362051 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.362057 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.362063 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.362068 | orchestrator | 2026-02-05 01:04:06.362074 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-05 01:04:06.362079 | orchestrator | Thursday 05 February 2026 01:02:05 +0000 (0:00:01.897) 0:01:58.823 ***** 2026-02-05 01:04:06.362085 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.362090 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.362096 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.362101 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.362107 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.362112 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.362118 | orchestrator | 2026-02-05 01:04:06.362123 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-05 01:04:06.362129 | orchestrator | Thursday 05 February 2026 01:02:07 +0000 (0:00:01.646) 0:02:00.470 ***** 2026-02-05 01:04:06.362134 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:04:06.362140 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.362145 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:04:06.362151 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.362156 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:04:06.362162 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.362167 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:04:06.362173 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.362178 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:04:06.362184 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.362190 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-05 01:04:06.362199 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.362205 | orchestrator | 2026-02-05 01:04:06.362210 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-05 01:04:06.362216 | orchestrator | Thursday 05 February 2026 01:02:09 +0000 (0:00:02.130) 0:02:02.600 ***** 2026-02-05 01:04:06.362221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.362227 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.362237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.362243 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.362251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-05 01:04:06.362257 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.362263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.362272 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.362277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.362283 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.362289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-05 01:04:06.362294 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.362300 | orchestrator | 2026-02-05 01:04:06.362305 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-05 01:04:06.362311 | orchestrator | Thursday 05 February 2026 01:02:11 +0000 (0:00:02.230) 0:02:04.831 ***** 2026-02-05 01:04:06.362320 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.362330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.362337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.362351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.362360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-05 01:04:06.362375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-05 01:04:06.362388 | orchestrator | 2026-02-05 01:04:06.362397 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-05 01:04:06.362405 | orchestrator | Thursday 05 February 2026 01:02:15 +0000 (0:00:03.466) 0:02:08.297 ***** 2026-02-05 01:04:06.362413 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:06.362421 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:06.362428 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:06.362440 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:04:06.362449 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:04:06.362456 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:04:06.362464 | orchestrator | 2026-02-05 01:04:06.362473 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-05 01:04:06.362482 | orchestrator | Thursday 05 February 2026 01:02:15 +0000 (0:00:00.518) 0:02:08.816 ***** 2026-02-05 01:04:06.362569 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:06.362581 | orchestrator | 2026-02-05 01:04:06.362592 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-05 01:04:06.362608 | orchestrator | Thursday 05 February 2026 01:02:18 +0000 (0:00:02.386) 0:02:11.203 ***** 2026-02-05 01:04:06.362616 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:06.362624 | orchestrator | 2026-02-05 01:04:06.362632 | orchestrator | TASK [neutron : 
Running Neutron bootstrap container] *************************** 2026-02-05 01:04:06.362639 | orchestrator | Thursday 05 February 2026 01:02:21 +0000 (0:00:02.751) 0:02:13.954 ***** 2026-02-05 01:04:06.362648 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:06.362656 | orchestrator | 2026-02-05 01:04:06.362664 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 01:04:06.362672 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:39.458) 0:02:53.412 ***** 2026-02-05 01:04:06.362680 | orchestrator | 2026-02-05 01:04:06.362688 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 01:04:06.362697 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:00.062) 0:02:53.475 ***** 2026-02-05 01:04:06.362705 | orchestrator | 2026-02-05 01:04:06.362713 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 01:04:06.362721 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:00.059) 0:02:53.535 ***** 2026-02-05 01:04:06.362729 | orchestrator | 2026-02-05 01:04:06.362737 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 01:04:06.362746 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:00.174) 0:02:53.710 ***** 2026-02-05 01:04:06.362754 | orchestrator | 2026-02-05 01:04:06.362762 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 01:04:06.362770 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:00.059) 0:02:53.770 ***** 2026-02-05 01:04:06.362779 | orchestrator | 2026-02-05 01:04:06.362787 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-05 01:04:06.362796 | orchestrator | Thursday 05 February 2026 01:03:00 +0000 (0:00:00.063) 0:02:53.833 ***** 2026-02-05 01:04:06.362804 | 
orchestrator | 2026-02-05 01:04:06.362813 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-05 01:04:06.362821 | orchestrator | Thursday 05 February 2026 01:03:01 +0000 (0:00:00.061) 0:02:53.895 ***** 2026-02-05 01:04:06.362829 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:06.362837 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:04:06.362846 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:04:06.362855 | orchestrator | 2026-02-05 01:04:06.362864 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-05 01:04:06.362874 | orchestrator | Thursday 05 February 2026 01:03:24 +0000 (0:00:23.060) 0:03:16.956 ***** 2026-02-05 01:04:06.362884 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:04:06.362891 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:04:06.362896 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:04:06.362902 | orchestrator | 2026-02-05 01:04:06.362907 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:04:06.362913 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 01:04:06.362920 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-05 01:04:06.362925 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-05 01:04:06.362931 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 01:04:06.362936 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 01:04:06.362942 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-05 01:04:06.362954 | orchestrator | 2026-02-05 
01:04:06.362959 | orchestrator | 2026-02-05 01:04:06.362965 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:04:06.362970 | orchestrator | Thursday 05 February 2026 01:04:04 +0000 (0:00:40.226) 0:03:57.182 ***** 2026-02-05 01:04:06.362982 | orchestrator | =============================================================================== 2026-02-05 01:04:06.362988 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 40.23s 2026-02-05 01:04:06.362993 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.46s 2026-02-05 01:04:06.362999 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.06s 2026-02-05 01:04:06.363004 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.02s 2026-02-05 01:04:06.363010 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.15s 2026-02-05 01:04:06.363015 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.57s 2026-02-05 01:04:06.363021 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.14s 2026-02-05 01:04:06.363030 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.12s 2026-02-05 01:04:06.363036 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.88s 2026-02-05 01:04:06.363041 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.77s 2026-02-05 01:04:06.363047 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.53s 2026-02-05 01:04:06.363052 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.47s 2026-02-05 01:04:06.363058 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini 
------------------- 3.41s 2026-02-05 01:04:06.363063 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.39s 2026-02-05 01:04:06.363068 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.16s 2026-02-05 01:04:06.363074 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.14s 2026-02-05 01:04:06.363079 | orchestrator | Setting sysctl values --------------------------------------------------- 3.07s 2026-02-05 01:04:06.363085 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.97s 2026-02-05 01:04:06.363095 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.91s 2026-02-05 01:04:06.363104 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 2.88s 2026-02-05 01:04:06.363114 | orchestrator | 2026-02-05 01:04:06 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:06.363123 | orchestrator | 2026-02-05 01:04:06 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:06.363133 | orchestrator | 2026-02-05 01:04:06 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:09.391663 | orchestrator | 2026-02-05 01:04:09 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:09.391866 | orchestrator | 2026-02-05 01:04:09 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:09.393027 | orchestrator | 2026-02-05 01:04:09 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:09.394816 | orchestrator | 2026-02-05 01:04:09 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:09.394864 | orchestrator | 2026-02-05 01:04:09 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:12.429850 | orchestrator | 2026-02-05 
01:04:12 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:12.431387 | orchestrator | 2026-02-05 01:04:12 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:12.433786 | orchestrator | 2026-02-05 01:04:12 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:12.436111 | orchestrator | 2026-02-05 01:04:12 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:12.436148 | orchestrator | 2026-02-05 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:15.475574 | orchestrator | 2026-02-05 01:04:15 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:15.475658 | orchestrator | 2026-02-05 01:04:15 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:15.476031 | orchestrator | 2026-02-05 01:04:15 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:15.476985 | orchestrator | 2026-02-05 01:04:15 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:15.477048 | orchestrator | 2026-02-05 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:18.527794 | orchestrator | 2026-02-05 01:04:18 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:18.529656 | orchestrator | 2026-02-05 01:04:18 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:18.532595 | orchestrator | 2026-02-05 01:04:18 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:18.534243 | orchestrator | 2026-02-05 01:04:18 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:18.534336 | orchestrator | 2026-02-05 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:21.573290 | orchestrator | 2026-02-05 01:04:21 | INFO  | Task 
9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:21.575390 | orchestrator | 2026-02-05 01:04:21 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:21.577216 | orchestrator | 2026-02-05 01:04:21 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:21.578627 | orchestrator | 2026-02-05 01:04:21 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:21.578670 | orchestrator | 2026-02-05 01:04:21 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:24.625209 | orchestrator | 2026-02-05 01:04:24 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:24.626804 | orchestrator | 2026-02-05 01:04:24 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:24.628561 | orchestrator | 2026-02-05 01:04:24 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:24.630112 | orchestrator | 2026-02-05 01:04:24 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:24.630159 | orchestrator | 2026-02-05 01:04:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:27.673836 | orchestrator | 2026-02-05 01:04:27 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:27.674736 | orchestrator | 2026-02-05 01:04:27 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:27.676579 | orchestrator | 2026-02-05 01:04:27 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:27.677248 | orchestrator | 2026-02-05 01:04:27 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:27.677269 | orchestrator | 2026-02-05 01:04:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:30.724366 | orchestrator | 2026-02-05 01:04:30 | INFO  | Task 
9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:30.724807 | orchestrator | 2026-02-05 01:04:30 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:30.725372 | orchestrator | 2026-02-05 01:04:30 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:30.726236 | orchestrator | 2026-02-05 01:04:30 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:30.726276 | orchestrator | 2026-02-05 01:04:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:33.771829 | orchestrator | 2026-02-05 01:04:33 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:33.773932 | orchestrator | 2026-02-05 01:04:33 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:33.774711 | orchestrator | 2026-02-05 01:04:33 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:33.775283 | orchestrator | 2026-02-05 01:04:33 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state STARTED 2026-02-05 01:04:33.775309 | orchestrator | 2026-02-05 01:04:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:36.816368 | orchestrator | 2026-02-05 01:04:36 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:36.816968 | orchestrator | 2026-02-05 01:04:36 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:36.817648 | orchestrator | 2026-02-05 01:04:36 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:36.818959 | orchestrator | 2026-02-05 01:04:36 | INFO  | Task 3bda310e-2525-4083-9acd-62dc61d18f11 is in state SUCCESS 2026-02-05 01:04:36.820616 | orchestrator | 2026-02-05 01:04:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:39.859107 | orchestrator | 2026-02-05 01:04:39 | INFO  | Task 
c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:39.859186 | orchestrator | 2026-02-05 01:04:39 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:39.861589 | orchestrator | 2026-02-05 01:04:39 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:39.861994 | orchestrator | 2026-02-05 01:04:39 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:39.862245 | orchestrator | 2026-02-05 01:04:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:42.901578 | orchestrator | 2026-02-05 01:04:42 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:42.902773 | orchestrator | 2026-02-05 01:04:42 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:42.904044 | orchestrator | 2026-02-05 01:04:42 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:42.904662 | orchestrator | 2026-02-05 01:04:42 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:42.904777 | orchestrator | 2026-02-05 01:04:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:45.937687 | orchestrator | 2026-02-05 01:04:45 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:45.939385 | orchestrator | 2026-02-05 01:04:45 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:45.940774 | orchestrator | 2026-02-05 01:04:45 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:45.942187 | orchestrator | 2026-02-05 01:04:45 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:45.942241 | orchestrator | 2026-02-05 01:04:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:48.984178 | orchestrator | 2026-02-05 01:04:48 | INFO  | Task 
c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:48.985872 | orchestrator | 2026-02-05 01:04:48 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:48.987287 | orchestrator | 2026-02-05 01:04:48 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:48.988992 | orchestrator | 2026-02-05 01:04:48 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:48.989387 | orchestrator | 2026-02-05 01:04:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:52.031207 | orchestrator | 2026-02-05 01:04:52 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:52.033016 | orchestrator | 2026-02-05 01:04:52 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:52.034517 | orchestrator | 2026-02-05 01:04:52 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:52.037018 | orchestrator | 2026-02-05 01:04:52 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:52.037066 | orchestrator | 2026-02-05 01:04:52 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:55.075194 | orchestrator | 2026-02-05 01:04:55 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:55.079734 | orchestrator | 2026-02-05 01:04:55 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state STARTED 2026-02-05 01:04:55.083786 | orchestrator | 2026-02-05 01:04:55 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:55.086627 | orchestrator | 2026-02-05 01:04:55 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:55.086699 | orchestrator | 2026-02-05 01:04:55 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:04:58.130359 | orchestrator | 2026-02-05 01:04:58 | INFO  | Task 
c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:04:58.133421 | orchestrator | 2026-02-05 01:04:58 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:04:58.134675 | orchestrator | 2026-02-05 01:04:58 | INFO  | Task 9bc53dcb-77e2-42eb-8c3d-6b2f0b05ad29 is in state SUCCESS 2026-02-05 01:04:58.135889 | orchestrator | 2026-02-05 01:04:58.135942 | orchestrator | 2026-02-05 01:04:58.135951 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:04:58.135959 | orchestrator | 2026-02-05 01:04:58.135966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:04:58.135974 | orchestrator | Thursday 05 February 2026 01:04:08 +0000 (0:00:00.208) 0:00:00.208 ***** 2026-02-05 01:04:58.135981 | orchestrator | ok: [testbed-manager] 2026-02-05 01:04:58.135988 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:04:58.135994 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:04:58.136000 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:04:58.136007 | orchestrator | ok: [testbed-node-3] 2026-02-05 01:04:58.136013 | orchestrator | ok: [testbed-node-4] 2026-02-05 01:04:58.136018 | orchestrator | ok: [testbed-node-5] 2026-02-05 01:04:58.136024 | orchestrator | 2026-02-05 01:04:58.136031 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:04:58.136037 | orchestrator | Thursday 05 February 2026 01:04:08 +0000 (0:00:00.598) 0:00:00.806 ***** 2026-02-05 01:04:58.136045 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136051 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136083 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136089 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136106 | orchestrator | 
ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136113 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136119 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-05 01:04:58.136125 | orchestrator | 2026-02-05 01:04:58.136132 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-05 01:04:58.136138 | orchestrator | 2026-02-05 01:04:58.136144 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-05 01:04:58.136150 | orchestrator | Thursday 05 February 2026 01:04:09 +0000 (0:00:00.499) 0:00:01.306 ***** 2026-02-05 01:04:58.136359 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:04:58.136389 | orchestrator | 2026-02-05 01:04:58.136397 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-05 01:04:58.136404 | orchestrator | Thursday 05 February 2026 01:04:10 +0000 (0:00:01.022) 0:00:02.329 ***** 2026-02-05 01:04:58.136410 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-05 01:04:58.136417 | orchestrator | 2026-02-05 01:04:58.136424 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-05 01:04:58.136430 | orchestrator | Thursday 05 February 2026 01:04:13 +0000 (0:00:03.234) 0:00:05.563 ***** 2026-02-05 01:04:58.136437 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-05 01:04:58.136447 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-05 01:04:58.136515 | orchestrator | 2026-02-05 01:04:58.136726 | 
orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-05 01:04:58.136732 | orchestrator | Thursday 05 February 2026 01:04:19 +0000 (0:00:06.036) 0:00:11.600 ***** 2026-02-05 01:04:58.136737 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-05 01:04:58.136741 | orchestrator | 2026-02-05 01:04:58.136745 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-05 01:04:58.136749 | orchestrator | Thursday 05 February 2026 01:04:22 +0000 (0:00:02.763) 0:00:14.364 ***** 2026-02-05 01:04:58.136753 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-05 01:04:58.136758 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:04:58.136762 | orchestrator | 2026-02-05 01:04:58.136766 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-05 01:04:58.136770 | orchestrator | Thursday 05 February 2026 01:04:25 +0000 (0:00:03.477) 0:00:17.842 ***** 2026-02-05 01:04:58.136773 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-05 01:04:58.136778 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-05 01:04:58.136781 | orchestrator | 2026-02-05 01:04:58.136785 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-05 01:04:58.136797 | orchestrator | Thursday 05 February 2026 01:04:31 +0000 (0:00:06.170) 0:00:24.012 ***** 2026-02-05 01:04:58.136801 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-05 01:04:58.136805 | orchestrator | 2026-02-05 01:04:58.136809 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:04:58.136812 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136820 | orchestrator | 
testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136827 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136848 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136856 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136877 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136885 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:04:58.136890 | orchestrator | 2026-02-05 01:04:58.136897 | orchestrator | 2026-02-05 01:04:58.136902 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:04:58.136909 | orchestrator | Thursday 05 February 2026 01:04:36 +0000 (0:00:04.101) 0:00:28.114 ***** 2026-02-05 01:04:58.136914 | orchestrator | =============================================================================== 2026-02-05 01:04:58.136920 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.17s 2026-02-05 01:04:58.136925 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.04s 2026-02-05 01:04:58.136932 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.10s 2026-02-05 01:04:58.136937 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.48s 2026-02-05 01:04:58.136944 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.23s 2026-02-05 01:04:58.136950 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.76s 2026-02-05 01:04:58.136958 | orchestrator | ceph-rgw : 
include_tasks ------------------------------------------------ 1.02s 2026-02-05 01:04:58.136964 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s 2026-02-05 01:04:58.136970 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2026-02-05 01:04:58.136976 | orchestrator | 2026-02-05 01:04:58.136981 | orchestrator | 2026-02-05 01:04:58.136987 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:04:58.136992 | orchestrator | 2026-02-05 01:04:58.136998 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:04:58.137013 | orchestrator | Thursday 05 February 2026 01:03:03 +0000 (0:00:00.239) 0:00:00.239 ***** 2026-02-05 01:04:58.137020 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:04:58.137026 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:04:58.137031 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:04:58.137037 | orchestrator | 2026-02-05 01:04:58.137043 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:04:58.137049 | orchestrator | Thursday 05 February 2026 01:03:03 +0000 (0:00:00.302) 0:00:00.542 ***** 2026-02-05 01:04:58.137054 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-05 01:04:58.137059 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-05 01:04:58.137065 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-05 01:04:58.137071 | orchestrator | 2026-02-05 01:04:58.137076 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-05 01:04:58.137082 | orchestrator | 2026-02-05 01:04:58.137087 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-05 01:04:58.137093 | orchestrator | Thursday 05 February 2026 01:03:04 
+0000 (0:00:00.419) 0:00:00.961 ***** 2026-02-05 01:04:58.137098 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:04:58.137104 | orchestrator | 2026-02-05 01:04:58.137109 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-05 01:04:58.137115 | orchestrator | Thursday 05 February 2026 01:03:04 +0000 (0:00:00.551) 0:00:01.513 ***** 2026-02-05 01:04:58.137121 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-05 01:04:58.137134 | orchestrator | 2026-02-05 01:04:58.137140 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-05 01:04:58.137145 | orchestrator | Thursday 05 February 2026 01:03:08 +0000 (0:00:03.402) 0:00:04.915 ***** 2026-02-05 01:04:58.137151 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-05 01:04:58.137157 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-05 01:04:58.137163 | orchestrator | 2026-02-05 01:04:58.137168 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-05 01:04:58.137174 | orchestrator | Thursday 05 February 2026 01:03:14 +0000 (0:00:06.481) 0:00:11.396 ***** 2026-02-05 01:04:58.137181 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:04:58.137188 | orchestrator | 2026-02-05 01:04:58.137193 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-05 01:04:58.137200 | orchestrator | Thursday 05 February 2026 01:03:17 +0000 (0:00:03.099) 0:00:14.495 ***** 2026-02-05 01:04:58.137205 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-05 01:04:58.137211 | orchestrator | [WARNING]: Module did not set no_log for update_password 
2026-02-05 01:04:58.137217 | orchestrator | 2026-02-05 01:04:58.137222 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-05 01:04:58.137227 | orchestrator | Thursday 05 February 2026 01:03:21 +0000 (0:00:03.780) 0:00:18.276 ***** 2026-02-05 01:04:58.137233 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:04:58.137239 | orchestrator | 2026-02-05 01:04:58.137245 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-05 01:04:58.137250 | orchestrator | Thursday 05 February 2026 01:03:25 +0000 (0:00:04.175) 0:00:22.451 ***** 2026-02-05 01:04:58.137256 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-05 01:04:58.137262 | orchestrator | 2026-02-05 01:04:58.137267 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-05 01:04:58.137276 | orchestrator | Thursday 05 February 2026 01:03:29 +0000 (0:00:03.549) 0:00:26.001 ***** 2026-02-05 01:04:58.137286 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.137291 | orchestrator | 2026-02-05 01:04:58.137297 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-05 01:04:58.137311 | orchestrator | Thursday 05 February 2026 01:03:32 +0000 (0:00:03.021) 0:00:29.022 ***** 2026-02-05 01:04:58.137318 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.137325 | orchestrator | 2026-02-05 01:04:58.137331 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-05 01:04:58.137337 | orchestrator | Thursday 05 February 2026 01:03:36 +0000 (0:00:04.035) 0:00:33.058 ***** 2026-02-05 01:04:58.137342 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.137348 | orchestrator | 2026-02-05 01:04:58.137354 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 
2026-02-05 01:04:58.137361 | orchestrator | Thursday 05 February 2026 01:03:39 +0000 (0:00:03.278) 0:00:36.336 *****
2026-02-05 01:04:58.137372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137443 | orchestrator |
2026-02-05 01:04:58.137449 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-02-05 01:04:58.137539 | orchestrator | Thursday 05 February 2026 01:03:41 +0000 (0:00:01.644) 0:00:37.981 *****
2026-02-05 01:04:58.137547 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:04:58.137553 | orchestrator |
2026-02-05 01:04:58.137559 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-02-05 01:04:58.137565 | orchestrator | Thursday 05 February 2026 01:03:41 +0000 (0:00:00.121) 0:00:38.102 *****
2026-02-05 01:04:58.137571 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:04:58.137577 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:04:58.137583 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:04:58.137590 | orchestrator |
2026-02-05 01:04:58.137596 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-02-05 01:04:58.137602 | orchestrator | Thursday 05 February 2026 01:03:41 +0000 (0:00:00.431) 0:00:38.534 *****
2026-02-05 01:04:58.137608 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-05 01:04:58.137615 | orchestrator |
2026-02-05 01:04:58.137621 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-02-05 01:04:58.137628 | orchestrator | Thursday 05 February 2026 01:03:42 +0000 (0:00:00.841) 0:00:39.375 *****
2026-02-05 01:04:58.137635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137695 | orchestrator |
2026-02-05 01:04:58.137703 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-02-05 01:04:58.137710 | orchestrator | Thursday 05 February 2026 01:03:45 +0000 (0:00:03.254) 0:00:42.630 *****
2026-02-05 01:04:58.137717 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:04:58.137787 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:04:58.137796 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:04:58.137802 | orchestrator |
2026-02-05 01:04:58.137809 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-05 01:04:58.137816 | orchestrator | Thursday 05 February 2026 01:03:46 +0000 (0:00:00.347) 0:00:42.977 *****
2026-02-05 01:04:58.137823 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:04:58.137829 | orchestrator |
2026-02-05 01:04:58.137835 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-02-05 01:04:58.137842 | orchestrator | Thursday 05 February 2026 01:03:46 +0000 (0:00:00.603) 0:00:43.580 *****
2026-02-05 01:04:58.137855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137929 | orchestrator |
2026-02-05 01:04:58.137936 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-02-05 01:04:58.137943 | orchestrator | Thursday 05 February 2026 01:03:48 +0000 (0:00:02.054) 0:00:45.635 *****
2026-02-05 01:04:58.137949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137971 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:04:58.137978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.137985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.137996 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:04:58.138009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138200 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:04:58.138207 | orchestrator |
2026-02-05 01:04:58.138214 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-02-05 01:04:58.138221 | orchestrator | Thursday 05 February 2026 01:03:49 +0000 (0:00:00.534) 0:00:46.170 *****
2026-02-05 01:04:58.138228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138259 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:04:58.138266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138273 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:04:58.138282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138296 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:04:58.138303 | orchestrator |
2026-02-05 01:04:58.138309 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-02-05 01:04:58.138316 | orchestrator | Thursday 05 February 2026 01:03:50 +0000 (0:00:00.938) 0:00:47.108 *****
2026-02-05 01:04:58.138324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138401 | orchestrator |
2026-02-05 01:04:58.138408 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-02-05 01:04:58.138415 | orchestrator | Thursday 05 February 2026 01:03:53 +0000 (0:00:03.033) 0:00:50.142 *****
2026-02-05 01:04:58.138428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-05 01:04:58.138479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:04:58.138505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:04:58.138512 | orchestrator | 2026-02-05 01:04:58.138519 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-05 01:04:58.138526 | orchestrator | Thursday 05 February 2026 01:04:02 +0000 (0:00:09.232) 0:00:59.374 ***** 2026-02-05 01:04:58.138536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:04:58.138543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:04:58.138550 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:58.138557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:04:58.138569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:04:58.138575 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:58.138587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-05 01:04:58.138596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:04:58.138603 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:58.138609 | orchestrator | 2026-02-05 01:04:58.138614 | orchestrator | 
TASK [magnum : Check magnum containers] **************************************** 2026-02-05 01:04:58.138620 | orchestrator | Thursday 05 February 2026 01:04:03 +0000 (0:00:00.741) 0:01:00.115 ***** 2026-02-05 01:04:58.138626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:04:58.138636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:04:58.138647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-05 01:04:58.138654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:04:58.138664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:04:58.138670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:04:58.138685 | orchestrator | 2026-02-05 01:04:58.138691 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-05 01:04:58.138697 | orchestrator | Thursday 05 February 2026 01:04:05 +0000 (0:00:01.790) 0:01:01.906 ***** 2026-02-05 01:04:58.138703 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:04:58.138709 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:04:58.138715 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:04:58.138721 | orchestrator | 2026-02-05 01:04:58.138726 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-05 
01:04:58.138732 | orchestrator | Thursday 05 February 2026 01:04:05 +0000 (0:00:00.273) 0:01:02.180 ***** 2026-02-05 01:04:58.138738 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.138744 | orchestrator | 2026-02-05 01:04:58.138750 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-05 01:04:58.138757 | orchestrator | Thursday 05 February 2026 01:04:07 +0000 (0:00:01.922) 0:01:04.103 ***** 2026-02-05 01:04:58.138763 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.138769 | orchestrator | 2026-02-05 01:04:58.138775 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-05 01:04:58.138781 | orchestrator | Thursday 05 February 2026 01:04:09 +0000 (0:00:02.039) 0:01:06.143 ***** 2026-02-05 01:04:58.138788 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.138794 | orchestrator | 2026-02-05 01:04:58.138800 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 01:04:58.138806 | orchestrator | Thursday 05 February 2026 01:04:24 +0000 (0:00:15.467) 0:01:21.610 ***** 2026-02-05 01:04:58.138813 | orchestrator | 2026-02-05 01:04:58.138819 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 01:04:58.138826 | orchestrator | Thursday 05 February 2026 01:04:24 +0000 (0:00:00.062) 0:01:21.672 ***** 2026-02-05 01:04:58.138832 | orchestrator | 2026-02-05 01:04:58.138839 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-05 01:04:58.138846 | orchestrator | Thursday 05 February 2026 01:04:25 +0000 (0:00:00.062) 0:01:21.735 ***** 2026-02-05 01:04:58.138852 | orchestrator | 2026-02-05 01:04:58.138859 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-05 01:04:58.138866 | orchestrator | Thursday 05 February 2026 01:04:25 +0000 
(0:00:00.060) 0:01:21.796 ***** 2026-02-05 01:04:58.138871 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.138877 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:04:58.138883 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:04:58.138890 | orchestrator | 2026-02-05 01:04:58.138896 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-05 01:04:58.138908 | orchestrator | Thursday 05 February 2026 01:04:42 +0000 (0:00:17.333) 0:01:39.129 ***** 2026-02-05 01:04:58.138915 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:04:58.138921 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:04:58.138927 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:04:58.138933 | orchestrator | 2026-02-05 01:04:58.138939 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:04:58.138947 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-05 01:04:58.138956 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:04:58.138962 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:04:58.138969 | orchestrator | 2026-02-05 01:04:58.138975 | orchestrator | 2026-02-05 01:04:58.138981 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:04:58.138994 | orchestrator | Thursday 05 February 2026 01:04:56 +0000 (0:00:14.321) 0:01:53.451 ***** 2026-02-05 01:04:58.139001 | orchestrator | =============================================================================== 2026-02-05 01:04:58.139008 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.33s 2026-02-05 01:04:58.139014 | orchestrator | magnum : Running Magnum bootstrap container 
---------------------------- 15.47s 2026-02-05 01:04:58.139020 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.32s 2026-02-05 01:04:58.139027 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.23s 2026-02-05 01:04:58.139037 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.48s 2026-02-05 01:04:58.139043 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 4.18s 2026-02-05 01:04:58.139050 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.04s 2026-02-05 01:04:58.139056 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.78s 2026-02-05 01:04:58.139062 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.55s 2026-02-05 01:04:58.139068 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.40s 2026-02-05 01:04:58.139075 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.28s 2026-02-05 01:04:58.139081 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.25s 2026-02-05 01:04:58.139087 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.10s 2026-02-05 01:04:58.139093 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.04s 2026-02-05 01:04:58.139099 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.02s 2026-02-05 01:04:58.139106 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.05s 2026-02-05 01:04:58.139113 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.04s 2026-02-05 01:04:58.139119 | orchestrator | magnum : Creating Magnum database 
--------------------------------------- 1.92s 2026-02-05 01:04:58.139126 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.79s 2026-02-05 01:04:58.139134 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.64s 2026-02-05 01:04:58.139142 | orchestrator | 2026-02-05 01:04:58 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:04:58.139149 | orchestrator | 2026-02-05 01:04:58 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:04:58.139158 | orchestrator | 2026-02-05 01:04:58 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:01.195186 | orchestrator | 2026-02-05 01:05:01 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:01.198245 | orchestrator | 2026-02-05 01:05:01 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:01.200990 | orchestrator | 2026-02-05 01:05:01 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:01.202701 | orchestrator | 2026-02-05 01:05:01 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:05:01.202818 | orchestrator | 2026-02-05 01:05:01 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:04.240079 | orchestrator | 2026-02-05 01:05:04 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:04.240170 | orchestrator | 2026-02-05 01:05:04 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:04.240178 | orchestrator | 2026-02-05 01:05:04 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:04.240185 | orchestrator | 2026-02-05 01:05:04 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state STARTED 2026-02-05 01:05:04.240219 | orchestrator | 2026-02-05 01:05:04 | INFO  | Wait 1 second(s) until the next check 
2026-02-05 01:05:19.451512 | orchestrator | 2026-02-05 01:05:19 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:19.451655 | orchestrator | 2026-02-05 01:05:19 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:19.452302 | orchestrator | 2026-02-05 01:05:19 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:19.456030 | orchestrator | 2026-02-05 01:05:19 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:19.456848 | orchestrator | 2026-02-05 01:05:19 | INFO  | Task 4a4209b4-4584-4866-ba91-3b5df052ad7a is in state SUCCESS 2026-02-05 01:05:19.457309 | orchestrator | 2026-02-05 01:05:19 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:22.489911 | orchestrator | 2026-02-05 01:05:22 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:22.490089 | orchestrator | 2026-02-05 01:05:22 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:22.490106 | orchestrator | 2026-02-05 01:05:22 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:22.490542 | orchestrator | 2026-02-05 01:05:22 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:22.490563 | 
orchestrator | 2026-02-05 01:05:22 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:25.525676 | orchestrator | 2026-02-05 01:05:25 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:25.528317 | orchestrator | 2026-02-05 01:05:25 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:25.531321 | orchestrator | 2026-02-05 01:05:25 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:25.532801 | orchestrator | 2026-02-05 01:05:25 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:25.532873 | orchestrator | 2026-02-05 01:05:25 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:28.572384 | orchestrator | 2026-02-05 01:05:28 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:28.574943 | orchestrator | 2026-02-05 01:05:28 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:28.577370 | orchestrator | 2026-02-05 01:05:28 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:28.579409 | orchestrator | 2026-02-05 01:05:28 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:28.579581 | orchestrator | 2026-02-05 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:31.612182 | orchestrator | 2026-02-05 01:05:31 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:31.612771 | orchestrator | 2026-02-05 01:05:31 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:31.614667 | orchestrator | 2026-02-05 01:05:31 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:31.615279 | orchestrator | 2026-02-05 01:05:31 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:31.615308 | orchestrator | 2026-02-05 
01:05:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:34.646154 | orchestrator | 2026-02-05 01:05:34 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:34.646520 | orchestrator | 2026-02-05 01:05:34 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:34.647277 | orchestrator | 2026-02-05 01:05:34 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:34.647981 | orchestrator | 2026-02-05 01:05:34 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:34.648019 | orchestrator | 2026-02-05 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:37.670472 | orchestrator | 2026-02-05 01:05:37 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:37.671635 | orchestrator | 2026-02-05 01:05:37 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:37.672392 | orchestrator | 2026-02-05 01:05:37 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:37.673040 | orchestrator | 2026-02-05 01:05:37 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:37.674269 | orchestrator | 2026-02-05 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:40.705683 | orchestrator | 2026-02-05 01:05:40 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:40.705751 | orchestrator | 2026-02-05 01:05:40 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:40.705761 | orchestrator | 2026-02-05 01:05:40 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:40.705768 | orchestrator | 2026-02-05 01:05:40 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:40.705776 | orchestrator | 2026-02-05 01:05:40 | INFO  | Wait 1 
second(s) until the next check 2026-02-05 01:05:43.734636 | orchestrator | 2026-02-05 01:05:43 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:43.734750 | orchestrator | 2026-02-05 01:05:43 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:43.735095 | orchestrator | 2026-02-05 01:05:43 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:43.736243 | orchestrator | 2026-02-05 01:05:43 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:43.736296 | orchestrator | 2026-02-05 01:05:43 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:46.840795 | orchestrator | 2026-02-05 01:05:46 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:46.841797 | orchestrator | 2026-02-05 01:05:46 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:46.842548 | orchestrator | 2026-02-05 01:05:46 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:46.845564 | orchestrator | 2026-02-05 01:05:46 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:46.848108 | orchestrator | 2026-02-05 01:05:46 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:05:49.876303 | orchestrator | 2026-02-05 01:05:49 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:05:49.876389 | orchestrator | 2026-02-05 01:05:49 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:05:49.876964 | orchestrator | 2026-02-05 01:05:49 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:05:49.877410 | orchestrator | 2026-02-05 01:05:49 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state STARTED 2026-02-05 01:05:49.877447 | orchestrator | 2026-02-05 01:05:49 | INFO  | Wait 1 second(s) until the next check 
2026-02-05 01:06:26.444679 | orchestrator | 2026-02-05 01:06:26 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED
2026-02-05 01:06:26.444935 | orchestrator | 2026-02-05 01:06:26 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED
2026-02-05 01:06:26.446161 | orchestrator | 2026-02-05 01:06:26 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED
2026-02-05 01:06:26.448901 | orchestrator | 2026-02-05 01:06:26 | INFO  | Task 7fd2a375-7a10-42ca-ae07-09d7fb798760 is in state SUCCESS
2026-02-05 01:06:26.450632 | orchestrator |
2026-02-05 01:06:26.450659 | orchestrator |
2026-02-05 01:06:26.450665 | orchestrator | PLAY [Download ironic ipa images] **********************************************
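The play that begins here downloads the ironic-agent initramfs and kernel images. After a long download like the one timed below, it is common to verify the artifact against a published digest; a sketch of such a check with Python's `hashlib` (the expected digest would come from the mirror's checksum file, which is not quoted in this log, so the names here are illustrative only):

```python
import hashlib


def sha256_of_stream(fobj, chunk_size=1 << 20):
    """Hash a file object in chunks so a large image (such as an
    ironic-agent initramfs) never has to sit in memory at once."""
    digest = hashlib.sha256()
    for chunk in iter(lambda: fobj.read(chunk_size), b""):
        digest.update(chunk)
    return digest.hexdigest()


def verify_image(path, expected_sha256):
    """Compare a downloaded artifact against an expected SHA-256
    digest; illustrative only, no digest appears in this log."""
    with open(path, "rb") as fobj:
        return sha256_of_stream(fobj) == expected_sha256
```

Ansible's `get_url` module can perform an equivalent check itself via its `checksum` parameter.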
2026-02-05 01:06:26.450669 | orchestrator |
2026-02-05 01:06:26.450673 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-02-05 01:06:26.450677 | orchestrator | Thursday 05 February 2026 01:00:06 +0000 (0:00:00.067) 0:00:00.067 *****
2026-02-05 01:06:26.450682 | orchestrator | changed: [localhost]
2026-02-05 01:06:26.450686 | orchestrator |
2026-02-05 01:06:26.450690 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-02-05 01:06:26.450694 | orchestrator | Thursday 05 February 2026 01:00:07 +0000 (0:00:00.736) 0:00:00.804 *****
2026-02-05 01:06:26.450698 | orchestrator |
2026-02-05 01:06:26.450714 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:06:26.450719 | orchestrator |
2026-02-05 01:06:26.450723 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:06:26.450726 | orchestrator |
2026-02-05 01:06:26.450730 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:06:26.450734 | orchestrator |
2026-02-05 01:06:26.450738 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:06:26.450742 | orchestrator |
2026-02-05 01:06:26.450745 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-02-05 01:06:26.450749 | orchestrator | changed: [localhost]
2026-02-05 01:06:26.450753 | orchestrator |
2026-02-05 01:06:26.450757 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-02-05 01:06:26.450761 | orchestrator | Thursday 05 February 2026 01:05:03 +0000 (0:04:56.337) 0:04:57.142 *****
2026-02-05 01:06:26.450764 | orchestrator | changed: [localhost]
2026-02-05 01:06:26.450768 | orchestrator |
2026-02-05 01:06:26.450772 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:06:26.450776 | orchestrator |
2026-02-05 01:06:26.450780 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:06:26.450783 | orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:00.341) 0:05:08.621 *****
2026-02-05 01:06:26.450787 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:06:26.450791 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:06:26.450795 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:06:26.450798 | orchestrator |
2026-02-05 01:06:26.450802 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:06:26.450806 | orchestrator | Thursday 05 February 2026 01:05:15 +0000 (0:00:00.341) 0:05:08.962 *****
2026-02-05 01:06:26.450810 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-02-05 01:06:26.450814 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-02-05 01:06:26.450817 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-02-05 01:06:26.450821 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-02-05 01:06:26.450825 | orchestrator |
2026-02-05 01:06:26.450829 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-02-05 01:06:26.450833 | orchestrator | skipping: no hosts matched
2026-02-05 01:06:26.450837 | orchestrator |
2026-02-05 01:06:26.450840 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:06:26.450851 | orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-05 01:06:26.450856 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-05 01:06:26.450861 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-05 01:06:26.450865 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-05 01:06:26.450869 | orchestrator |
2026-02-05 01:06:26.450872 | orchestrator |
2026-02-05 01:06:26.450876 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:06:26.450880 | orchestrator | Thursday 05 February 2026 01:05:16 +0000 (0:00:00.688) 0:05:09.650 *****
2026-02-05 01:06:26.450884 | orchestrator | ===============================================================================
2026-02-05 01:06:26.450887 | orchestrator | Download ironic-agent initramfs --------------------------------------- 296.34s
2026-02-05 01:06:26.450891 | orchestrator | Download ironic-agent kernel ------------------------------------------- 11.48s
2026-02-05 01:06:26.450895 | orchestrator | Ensure the destination directory exists --------------------------------- 0.74s
2026-02-05 01:06:26.450898 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2026-02-05 01:06:26.450906 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-02-05 01:06:26.450910 | orchestrator |
2026-02-05 01:06:26.450913 | orchestrator |
2026-02-05 01:06:26.450917 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:06:26.450921 | orchestrator |
2026-02-05 01:06:26.450925 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:06:26.450928 | orchestrator | Thursday 05 February 2026 01:03:27 +0000 (0:00:00.302) 0:00:00.302 *****
2026-02-05 01:06:26.450932 | orchestrator | ok: [testbed-manager]
2026-02-05 01:06:26.450936 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:06:26.450940 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:06:26.450944 | orchestrator | ok:
[testbed-node-2]
2026-02-05 01:06:26.450947 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:06:26.450951 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:06:26.450955 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:06:26.450959 | orchestrator |
2026-02-05 01:06:26.450962 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:06:26.450966 | orchestrator | Thursday 05 February 2026 01:03:28 +0000 (0:00:00.702) 0:00:01.005 *****
2026-02-05 01:06:26.450970 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-05 01:06:26.450974 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-05 01:06:26.450984 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-05 01:06:26.450988 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-05 01:06:26.450992 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-05 01:06:26.450995 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-05 01:06:26.451003 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-05 01:06:26.451007 | orchestrator |
2026-02-05 01:06:26.451011 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-05 01:06:26.451015 | orchestrator |
2026-02-05 01:06:26.451019 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-05 01:06:26.451022 | orchestrator | Thursday 05 February 2026 01:03:28 +0000 (0:00:00.616) 0:00:01.621 *****
2026-02-05 01:06:26.451026 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:06:26.451031 | orchestrator |
2026-02-05 01:06:26.451034 | orchestrator | TASK [prometheus : Ensuring config directories exist]
************************** 2026-02-05 01:06:26.451038 | orchestrator | Thursday 05 February 2026 01:03:29 +0000 (0:00:01.225) 0:00:02.847 ***** 2026-02-05 01:06:26.451048 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 01:06:26.451054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451073 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451094 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 
01:06:26.451133 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 01:06:26.451139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451160 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451167 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451180 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451193 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451214 | orchestrator | 2026-02-05 01:06:26.451218 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-05 01:06:26.451223 | orchestrator | Thursday 05 February 2026 01:03:32 +0000 (0:00:02.870) 0:00:05.717 ***** 2026-02-05 01:06:26.451227 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:06:26.451231 | orchestrator | 2026-02-05 01:06:26.451235 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-05 01:06:26.451239 | orchestrator | Thursday 05 February 2026 01:03:34 +0000 (0:00:01.213) 0:00:06.931 ***** 2026-02-05 01:06:26.451243 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 01:06:26.451251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451288 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.451294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451329 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451353 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451419 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451444 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 01:06:26.451449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.451469 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451484 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.451489 | orchestrator | 2026-02-05 01:06:26.451496 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-05 01:06:26.451501 | orchestrator | Thursday 05 February 2026 01:03:39 +0000 (0:00:05.043) 0:00:11.974 ***** 2026-02-05 01:06:26.451505 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-05 01:06:26.451512 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:06:26.451517 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:06:26.451524 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-05 01:06:26.451530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:06:26.451551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:06:26.451577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451584 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.451594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:06:26.451601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:06:26.451630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:06:26.451644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:06:26.451667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-05 01:06:26.451673 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.451680 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.451686 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.451692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:06:26.451703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-02-05 01:06:26.451714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-05 01:06:26.451721 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.451728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-05 01:06:26.451734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-05 01:06:26.451741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451748 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:06:26.451757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451782 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:06:26.451792 | orchestrator |
2026-02-05 01:06:26.451798 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-05 01:06:26.451805 | orchestrator | Thursday 05 February 2026 01:03:40 +0000 (0:00:01.614) 0:00:13.589 *****
2026-02-05 01:06:26.451815 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 01:06:26.451822 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451829 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451838 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 01:06:26.451843 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451852 | orchestrator | skipping: [testbed-manager]
2026-02-05 01:06:26.451856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451881 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:06:26.451887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451912 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:06:26.451916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.451954 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:06:26.451958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451972 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:06:26.451976 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.451980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.451988 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:06:26.451994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452014 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:06:26.452020 | orchestrator |
2026-02-05 01:06:26.452026 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-05 01:06:26.452032 | orchestrator | Thursday 05 February 2026 01:03:42 +0000 (0:00:01.712) 0:00:15.301 *****
2026-02-05 01:06:26.452043 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-05 01:06:26.452050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452078 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452082 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-05 01:06:26.452092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452096 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452113 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452128 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-05 01:06:26.452132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452161 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-05 01:06:26.452183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-05 01:06:26.452197 | orchestrator |
2026-02-05 01:06:26.452201 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-05 01:06:26.452205 | orchestrator | Thursday 05 February 2026 01:03:48 +0000 (0:00:06.417) 0:00:21.719 *****
2026-02-05 01:06:26.452209 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-05 01:06:26.452212 | orchestrator |
2026-02-05 01:06:26.452216 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-05 01:06:26.452220 | orchestrator | Thursday 05 February 2026 01:03:49 +0000 (0:00:01.122) 0:00:22.842 *****
2026-02-05 01:06:26.452224 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 01:06:26.452230 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 01:06:26.452235 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 01:06:26.452242 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-05 01:06:26.452248 | orchestrator | skipping: [testbed-node-3] => (item={'path':
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452252 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.452256 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452261 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452267 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452271 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1093084, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8852215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452278 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 
'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452282 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452288 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452292 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452296 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452311 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452322 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452329 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452339 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452346 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452352 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452363 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452370 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452381 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452403 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1093115, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8916986, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.452414 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452435 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452446 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452458 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452468 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452474 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452484 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452490 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452496 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452506 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452518 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452524 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452529 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1093074, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8832214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.452538 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452545 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452552 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452563 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452574 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452580 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452586 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452595 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452606 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452612 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1093107, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8899999, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.452625 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452631 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452638 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-02-05 01:06:26.452644 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452655 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452662 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452669 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452683 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452689 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452695 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 
'inode': 1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452701 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452708 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452714 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452748 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1093066, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.452777 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452788 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-02-05 01:06:26.452795 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452845 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452857 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452864 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452870 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452887 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452894 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452905 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452912 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452922 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452928 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452939 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452949 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-05 01:06:26.452956 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452963 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452970 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452981 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.452993 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453002 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 
1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453010 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1093085, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8858473, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453023 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453033 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453040 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453050 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
 2026-02-05 01:06:26.453058 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453068 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453075 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453081 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453090 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453102 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453108 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 
'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453115 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453132 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.453139 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453146 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.453152 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1093098, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8894737, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453162 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453172 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453176 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.453180 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453190 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453196 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-05 01:06:26.453202 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453210 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453223 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453234 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453240 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453246 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.453253 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453263 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.453270 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1093089, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453276 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-05 01:06:26.453283 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.453290 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1093083, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8849947, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453307 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093113, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8912525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453315 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093060, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8812459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1093127, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.893379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453333 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1093110, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8909595, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453341 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1093073, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8829968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453347 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1093063, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.881601, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453354 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1093092, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8871648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453369 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1093091, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8862216, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453376 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1093125, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8929338, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-05 01:06:26.453383 | orchestrator | 2026-02-05 01:06:26.453436 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-05 01:06:26.453444 | orchestrator | Thursday 05 February 2026 01:04:13 +0000 (0:00:23.272) 0:00:46.114 ***** 2026-02-05 01:06:26.453450 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 01:06:26.453456 | orchestrator | 2026-02-05 01:06:26.453462 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-05 01:06:26.453468 | orchestrator | Thursday 05 February 2026 01:04:13 +0000 (0:00:00.676) 0:00:46.790 ***** 2026-02-05 01:06:26.453474 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453481 
| orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453487 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453494 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453500 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453507 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 01:06:26.453514 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453521 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453528 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453534 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453541 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453548 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-05 01:06:26.453554 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453567 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453574 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453581 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453587 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453594 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:06:26.453601 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453608 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453614 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453626 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453633 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453639 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-05 01:06:26.453646 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453652 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453659 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453666 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453673 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453678 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-05 01:06:26.453681 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453686 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453689 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453693 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453697 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453701 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-05 01:06:26.453705 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.453709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453713 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-05 01:06:26.453717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-05 01:06:26.453720 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-05 01:06:26.453724 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-05 01:06:26.453728 | orchestrator | 2026-02-05 
01:06:26.453732 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-05 01:06:26.453736 | orchestrator | Thursday 05 February 2026 01:04:15 +0000 (0:00:01.858) 0:00:48.648 ***** 2026-02-05 01:06:26.453740 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-05 01:06:26.453744 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-05 01:06:26.453751 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.453756 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-05 01:06:26.453759 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.453763 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.453767 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-05 01:06:26.453771 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.453774 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-05 01:06:26.453778 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.453782 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-05 01:06:26.453786 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.453790 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-05 01:06:26.453793 | orchestrator | 2026-02-05 01:06:26.453797 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-05 01:06:26.453801 | orchestrator | Thursday 05 February 2026 01:04:30 +0000 (0:00:14.626) 0:01:03.275 ***** 2026-02-05 01:06:26.453805 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-05 01:06:26.453809 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-05 01:06:26.453813 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.453816 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.453823 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-05 01:06:26.453827 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.453831 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-05 01:06:26.453834 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.453838 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-05 01:06:26.453842 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.453846 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-05 01:06:26.453850 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.453853 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-05 01:06:26.453857 | orchestrator | 2026-02-05 01:06:26 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:26.453865 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-05 01:06:26.453878 | orchestrator | Thursday 05 February 2026 01:04:33 +0000 (0:00:02.918) 0:01:06.193 ***** 2026-02-05 01:06:26.453882 | orchestrator | skipping: [testbed-node-0] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-05 01:06:26.453887 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.453891 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-05 01:06:26.453895 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.453899 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-05 01:06:26.453903 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.453907 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-05 01:06:26.453910 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.453914 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-05 01:06:26.453918 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.453922 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-05 01:06:26.453926 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.453930 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-05 01:06:26.453933 | orchestrator | 2026-02-05 01:06:26.453937 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-05 01:06:26.453941 | orchestrator | Thursday 05 February 2026 01:04:34 +0000 (0:00:01.421) 0:01:07.615 ***** 2026-02-05 01:06:26.453945 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 01:06:26.453949 | orchestrator | 2026-02-05 
01:06:26.453953 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-05 01:06:26.453956 | orchestrator | Thursday 05 February 2026 01:04:35 +0000 (0:00:00.784) 0:01:08.399 ***** 2026-02-05 01:06:26.453960 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.453964 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.453968 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.453972 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.453976 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.453979 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.453983 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.453989 | orchestrator | 2026-02-05 01:06:26.453995 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-05 01:06:26.453999 | orchestrator | Thursday 05 February 2026 01:04:36 +0000 (0:00:00.604) 0:01:09.004 ***** 2026-02-05 01:06:26.454003 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.454007 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.454011 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.454051 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.454061 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:26.454068 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:26.454073 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:26.454080 | orchestrator | 2026-02-05 01:06:26.454086 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-05 01:06:26.454092 | orchestrator | Thursday 05 February 2026 01:04:38 +0000 (0:00:02.156) 0:01:11.161 ***** 2026-02-05 01:06:26.454099 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454106 | orchestrator | skipping: 
[testbed-node-0] 2026-02-05 01:06:26.454112 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454119 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.454126 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454131 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.454138 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454144 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.454150 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454156 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.454163 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454169 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.454176 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-05 01:06:26.454183 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.454189 | orchestrator | 2026-02-05 01:06:26.454196 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-05 01:06:26.454203 | orchestrator | Thursday 05 February 2026 01:04:39 +0000 (0:00:01.485) 0:01:12.646 ***** 2026-02-05 01:06:26.454208 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-05 01:06:26.454212 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.454220 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-05 01:06:26.454224 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.454228 
| orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-05 01:06:26.454232 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.454236 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-05 01:06:26.454240 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.454243 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-05 01:06:26.454247 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.454251 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-05 01:06:26.454255 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.454259 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-05 01:06:26.454262 | orchestrator | 2026-02-05 01:06:26.454269 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-05 01:06:26.454273 | orchestrator | Thursday 05 February 2026 01:04:41 +0000 (0:00:01.444) 0:01:14.091 ***** 2026-02-05 01:06:26.454277 | orchestrator | [WARNING]: Skipped 2026-02-05 01:06:26.454281 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-05 01:06:26.454285 | orchestrator | due to this access issue: 2026-02-05 01:06:26.454289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-05 01:06:26.454293 | orchestrator | not a directory 2026-02-05 01:06:26.454297 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-05 01:06:26.454301 | orchestrator | 2026-02-05 01:06:26.454305 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 
2026-02-05 01:06:26.454309 | orchestrator | Thursday 05 February 2026 01:04:42 +0000 (0:00:01.080) 0:01:15.172 ***** 2026-02-05 01:06:26.454312 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.454316 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.454320 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.454324 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.454328 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.454331 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.454335 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.454339 | orchestrator | 2026-02-05 01:06:26.454343 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-05 01:06:26.454347 | orchestrator | Thursday 05 February 2026 01:04:43 +0000 (0:00:00.752) 0:01:15.924 ***** 2026-02-05 01:06:26.454350 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.454354 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:06:26.454358 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:06:26.454362 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:06:26.454365 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:06:26.454369 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:06:26.454375 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:06:26.454379 | orchestrator | 2026-02-05 01:06:26.454383 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-05 01:06:26.454410 | orchestrator | Thursday 05 February 2026 01:04:43 +0000 (0:00:00.822) 0:01:16.746 ***** 2026-02-05 01:06:26.454416 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-05 01:06:26.454422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454437 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454441 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454449 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-05 01:06:26.454462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-02-05 01:06:26.454481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454485 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454504 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-05 01:06:26.454514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454531 | orchestrator 
| changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-05 01:06:26.454557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454566 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-05 01:06:26.454570 | orchestrator | 2026-02-05 01:06:26.454574 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-05 01:06:26.454578 | orchestrator | Thursday 05 February 2026 01:04:49 +0000 (0:00:05.199) 0:01:21.946 ***** 2026-02-05 01:06:26.454582 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-05 01:06:26.454585 | orchestrator | skipping: [testbed-manager] 2026-02-05 01:06:26.454589 | orchestrator | 2026-02-05 01:06:26.454593 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454597 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:01.021) 0:01:22.967 ***** 2026-02-05 01:06:26.454601 | orchestrator | 2026-02-05 01:06:26.454605 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454610 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:00.060) 0:01:23.027 ***** 2026-02-05 01:06:26.454614 | orchestrator | 2026-02-05 01:06:26.454618 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454622 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:00.062) 0:01:23.090 ***** 2026-02-05 01:06:26.454626 | orchestrator | 2026-02-05 01:06:26.454629 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454633 | orchestrator | Thursday 05 
February 2026 01:04:50 +0000 (0:00:00.061) 0:01:23.151 ***** 2026-02-05 01:06:26.454637 | orchestrator | 2026-02-05 01:06:26.454641 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454647 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:00.163) 0:01:23.315 ***** 2026-02-05 01:06:26.454651 | orchestrator | 2026-02-05 01:06:26.454655 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454658 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:00.059) 0:01:23.374 ***** 2026-02-05 01:06:26.454662 | orchestrator | 2026-02-05 01:06:26.454666 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-05 01:06:26.454670 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:00.059) 0:01:23.433 ***** 2026-02-05 01:06:26.454674 | orchestrator | 2026-02-05 01:06:26.454678 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-05 01:06:26.454682 | orchestrator | Thursday 05 February 2026 01:04:50 +0000 (0:00:00.088) 0:01:23.522 ***** 2026-02-05 01:06:26.454685 | orchestrator | changed: [testbed-manager] 2026-02-05 01:06:26.454689 | orchestrator | 2026-02-05 01:06:26.454693 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-05 01:06:26.454697 | orchestrator | Thursday 05 February 2026 01:05:06 +0000 (0:00:15.800) 0:01:39.323 ***** 2026-02-05 01:06:26.454701 | orchestrator | changed: [testbed-manager] 2026-02-05 01:06:26.454705 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:26.454708 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:26.454712 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:06:26.454716 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:06:26.454720 | orchestrator | changed: [testbed-node-2] 2026-02-05 
01:06:26.454724 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:06:26.454727 | orchestrator | 2026-02-05 01:06:26.454731 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-05 01:06:26.454735 | orchestrator | Thursday 05 February 2026 01:05:19 +0000 (0:00:13.457) 0:01:52.780 ***** 2026-02-05 01:06:26.454739 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:26.454743 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:26.454747 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:26.454750 | orchestrator | 2026-02-05 01:06:26.454754 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-05 01:06:26.454761 | orchestrator | Thursday 05 February 2026 01:05:29 +0000 (0:00:10.069) 0:02:02.850 ***** 2026-02-05 01:06:26.454765 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:26.454769 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:26.454773 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:26.454776 | orchestrator | 2026-02-05 01:06:26.454780 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-05 01:06:26.454784 | orchestrator | Thursday 05 February 2026 01:05:35 +0000 (0:00:05.945) 0:02:08.795 ***** 2026-02-05 01:06:26.454788 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:06:26.454792 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:06:26.454795 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:26.454799 | orchestrator | changed: [testbed-manager] 2026-02-05 01:06:26.454803 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:26.454807 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:26.454811 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:06:26.454815 | orchestrator | 2026-02-05 01:06:26.454818 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 
2026-02-05 01:06:26.454822 | orchestrator | Thursday 05 February 2026 01:05:51 +0000 (0:00:15.804) 0:02:24.600 ***** 2026-02-05 01:06:26.454826 | orchestrator | changed: [testbed-manager] 2026-02-05 01:06:26.454830 | orchestrator | 2026-02-05 01:06:26.454834 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-05 01:06:26.454838 | orchestrator | Thursday 05 February 2026 01:05:58 +0000 (0:00:06.326) 0:02:30.927 ***** 2026-02-05 01:06:26.454841 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:06:26.454845 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:06:26.454849 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:06:26.454853 | orchestrator | 2026-02-05 01:06:26.454857 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-05 01:06:26.454863 | orchestrator | Thursday 05 February 2026 01:06:09 +0000 (0:00:11.522) 0:02:42.449 ***** 2026-02-05 01:06:26.454867 | orchestrator | changed: [testbed-manager] 2026-02-05 01:06:26.454870 | orchestrator | 2026-02-05 01:06:26.454874 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-05 01:06:26.454878 | orchestrator | Thursday 05 February 2026 01:06:19 +0000 (0:00:09.656) 0:02:52.106 ***** 2026-02-05 01:06:26.454882 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:06:26.454886 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:06:26.454890 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:06:26.454894 | orchestrator | 2026-02-05 01:06:26.454897 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:06:26.454902 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-05 01:06:26.454906 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 
01:06:26.454910 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:06:26.454916 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:06:26.454920 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:06:26.454924 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:06:26.454928 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:06:26.454932 | orchestrator | 2026-02-05 01:06:26.454936 | orchestrator | 2026-02-05 01:06:26.454940 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:06:26.454943 | orchestrator | Thursday 05 February 2026 01:06:23 +0000 (0:00:04.608) 0:02:56.715 ***** 2026-02-05 01:06:26.454947 | orchestrator | =============================================================================== 2026-02-05 01:06:26.454951 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.27s 2026-02-05 01:06:26.454955 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.80s 2026-02-05 01:06:26.454959 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.80s 2026-02-05 01:06:26.454963 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.63s 2026-02-05 01:06:26.454967 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.46s 2026-02-05 01:06:26.454971 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.52s 2026-02-05 01:06:26.454974 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.07s 2026-02-05 
01:06:26.454978 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.66s 2026-02-05 01:06:26.454982 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.42s 2026-02-05 01:06:26.454986 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.33s 2026-02-05 01:06:26.454990 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.95s 2026-02-05 01:06:26.454993 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.20s 2026-02-05 01:06:26.454997 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.04s 2026-02-05 01:06:26.455001 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 4.61s 2026-02-05 01:06:26.455007 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.92s 2026-02-05 01:06:26.455015 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.87s 2026-02-05 01:06:26.455019 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.16s 2026-02-05 01:06:26.455023 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.86s 2026-02-05 01:06:26.455027 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.71s 2026-02-05 01:06:26.455031 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.61s 2026-02-05 01:06:29.483590 | orchestrator | 2026-02-05 01:06:29 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:29.484745 | orchestrator | 2026-02-05 01:06:29 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:29.485520 | orchestrator | 2026-02-05 01:06:29 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc 
is in state STARTED 2026-02-05 01:06:29.486713 | orchestrator | 2026-02-05 01:06:29 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:29.486899 | orchestrator | 2026-02-05 01:06:29 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:32.531021 | orchestrator | 2026-02-05 01:06:32 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:32.532809 | orchestrator | 2026-02-05 01:06:32 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:32.534605 | orchestrator | 2026-02-05 01:06:32 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:32.536295 | orchestrator | 2026-02-05 01:06:32 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:32.536337 | orchestrator | 2026-02-05 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:35.578292 | orchestrator | 2026-02-05 01:06:35 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:35.579335 | orchestrator | 2026-02-05 01:06:35 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:35.580930 | orchestrator | 2026-02-05 01:06:35 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:35.582361 | orchestrator | 2026-02-05 01:06:35 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:35.582445 | orchestrator | 2026-02-05 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:38.628903 | orchestrator | 2026-02-05 01:06:38 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:38.631143 | orchestrator | 2026-02-05 01:06:38 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:38.632943 | orchestrator | 2026-02-05 01:06:38 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 
01:06:38.634726 | orchestrator | 2026-02-05 01:06:38 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:38.634793 | orchestrator | 2026-02-05 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:41.681046 | orchestrator | 2026-02-05 01:06:41 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:41.683560 | orchestrator | 2026-02-05 01:06:41 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:41.686416 | orchestrator | 2026-02-05 01:06:41 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:41.689155 | orchestrator | 2026-02-05 01:06:41 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:41.689208 | orchestrator | 2026-02-05 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:44.737265 | orchestrator | 2026-02-05 01:06:44 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:44.737996 | orchestrator | 2026-02-05 01:06:44 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:44.739583 | orchestrator | 2026-02-05 01:06:44 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:44.741576 | orchestrator | 2026-02-05 01:06:44 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:44.741620 | orchestrator | 2026-02-05 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:47.789578 | orchestrator | 2026-02-05 01:06:47 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:47.790506 | orchestrator | 2026-02-05 01:06:47 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:47.792254 | orchestrator | 2026-02-05 01:06:47 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:47.793717 | orchestrator 
| 2026-02-05 01:06:47 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:47.793754 | orchestrator | 2026-02-05 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:50.839400 | orchestrator | 2026-02-05 01:06:50 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:50.841719 | orchestrator | 2026-02-05 01:06:50 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:50.842964 | orchestrator | 2026-02-05 01:06:50 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:50.844547 | orchestrator | 2026-02-05 01:06:50 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:50.844584 | orchestrator | 2026-02-05 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:53.885239 | orchestrator | 2026-02-05 01:06:53 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:53.887078 | orchestrator | 2026-02-05 01:06:53 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:53.888622 | orchestrator | 2026-02-05 01:06:53 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:53.889920 | orchestrator | 2026-02-05 01:06:53 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:53.890053 | orchestrator | 2026-02-05 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:56.931849 | orchestrator | 2026-02-05 01:06:56 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:56.933594 | orchestrator | 2026-02-05 01:06:56 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:56.935714 | orchestrator | 2026-02-05 01:06:56 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:56.937467 | orchestrator | 2026-02-05 01:06:56 | INFO  | 
Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:56.937528 | orchestrator | 2026-02-05 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:06:59.991624 | orchestrator | 2026-02-05 01:06:59 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:06:59.993219 | orchestrator | 2026-02-05 01:06:59 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:06:59.994780 | orchestrator | 2026-02-05 01:06:59 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:06:59.996624 | orchestrator | 2026-02-05 01:06:59 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:06:59.996668 | orchestrator | 2026-02-05 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:27.405330 | orchestrator | 2026-02-05 01:07:27 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state STARTED 2026-02-05 01:07:27.406694 | orchestrator | 2026-02-05 01:07:27 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:27.408042 | orchestrator | 2026-02-05 01:07:27 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:07:27.409389 | orchestrator | 2026-02-05 01:07:27 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:27.409685 | orchestrator | 2026-02-05 01:07:27 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:30.457220 | orchestrator | 2026-02-05 01:07:30.457270 | orchestrator | 2026-02-05 01:07:30.457277 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:07:30.457283 | orchestrator | 2026-02-05 01:07:30.457289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:07:30.457295 | orchestrator | Thursday 05 February 2026 01:04:41 +0000 (0:00:00.295) 0:00:00.295 ***** 2026-02-05 01:07:30.457323 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:07:30.457330 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:07:30.457335 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:07:30.457382 | orchestrator | 2026-02-05 01:07:30.457388 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:07:30.457393 | orchestrator | Thursday 05 February 2026 01:04:41 +0000 (0:00:00.226) 0:00:00.522 ***** 2026-02-05 01:07:30.457398 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-05 01:07:30.457404 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 
2026-02-05 01:07:30.457410 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-05 01:07:30.457415 | orchestrator | 2026-02-05 01:07:30.457420 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-05 01:07:30.457425 | orchestrator | 2026-02-05 01:07:30.457431 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:07:30.457451 | orchestrator | Thursday 05 February 2026 01:04:41 +0000 (0:00:00.362) 0:00:00.885 ***** 2026-02-05 01:07:30.457455 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:30.457459 | orchestrator | 2026-02-05 01:07:30.457462 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-05 01:07:30.457466 | orchestrator | Thursday 05 February 2026 01:04:42 +0000 (0:00:00.407) 0:00:01.292 ***** 2026-02-05 01:07:30.457469 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-05 01:07:30.457472 | orchestrator | 2026-02-05 01:07:30.457475 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-05 01:07:30.457478 | orchestrator | Thursday 05 February 2026 01:04:45 +0000 (0:00:03.537) 0:00:04.829 ***** 2026-02-05 01:07:30.457482 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-05 01:07:30.457485 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-05 01:07:30.457488 | orchestrator | 2026-02-05 01:07:30.457491 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-05 01:07:30.457495 | orchestrator | Thursday 05 February 2026 01:04:52 +0000 (0:00:07.266) 0:00:12.096 ***** 2026-02-05 01:07:30.457498 | orchestrator | ok: [testbed-node-0] => 
(item=service) 2026-02-05 01:07:30.457501 | orchestrator | 2026-02-05 01:07:30.457504 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-05 01:07:30.457507 | orchestrator | Thursday 05 February 2026 01:04:56 +0000 (0:00:03.334) 0:00:15.430 ***** 2026-02-05 01:07:30.457511 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-05 01:07:30.457514 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:07:30.457517 | orchestrator | 2026-02-05 01:07:30.457520 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-05 01:07:30.457523 | orchestrator | Thursday 05 February 2026 01:05:00 +0000 (0:00:04.458) 0:00:19.889 ***** 2026-02-05 01:07:30.457526 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:07:30.457529 | orchestrator | 2026-02-05 01:07:30.457533 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-05 01:07:30.457542 | orchestrator | Thursday 05 February 2026 01:05:04 +0000 (0:00:03.347) 0:00:23.236 ***** 2026-02-05 01:07:30.457545 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-05 01:07:30.457548 | orchestrator | 2026-02-05 01:07:30.457551 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-05 01:07:30.457574 | orchestrator | Thursday 05 February 2026 01:05:08 +0000 (0:00:04.393) 0:00:27.630 ***** 2026-02-05 01:07:30.457591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.457600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.457606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.457610 | orchestrator | 2026-02-05 01:07:30.457613 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:07:30.457616 | orchestrator | Thursday 05 February 2026 01:05:13 +0000 (0:00:05.027) 0:00:32.657 ***** 2026-02-05 01:07:30.457622 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:30.457626 | orchestrator | 2026-02-05 01:07:30.457629 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-05 01:07:30.457634 | orchestrator | Thursday 05 February 2026 01:05:14 +0000 (0:00:00.597) 0:00:33.255 ***** 2026-02-05 01:07:30.457637 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:30.457641 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.457644 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:30.457647 | orchestrator | 2026-02-05 01:07:30.457650 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-05 01:07:30.457654 | orchestrator | Thursday 05 February 2026 01:05:17 +0000 (0:00:03.225) 0:00:36.480 ***** 2026-02-05 01:07:30.457657 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:30.457660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:30.457663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:30.457667 | orchestrator | 2026-02-05 01:07:30.457670 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-05 01:07:30.457675 | orchestrator | Thursday 05 February 2026 01:05:18 +0000 (0:00:01.484) 0:00:37.965 ***** 2026-02-05 01:07:30.457680 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:30.457685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:30.457691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:30.457696 | orchestrator | 2026-02-05 01:07:30.457701 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-05 01:07:30.457706 | orchestrator | Thursday 05 February 2026 01:05:19 +0000 (0:00:01.137) 0:00:39.103 ***** 2026-02-05 01:07:30.457712 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:07:30.457715 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:07:30.457719 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:07:30.457724 | orchestrator | 2026-02-05 01:07:30.457729 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-05 01:07:30.457734 | orchestrator | Thursday 05 February 2026 01:05:20 +0000 (0:00:00.752) 0:00:39.855 ***** 2026-02-05 01:07:30.457739 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
01:07:30.457744 | orchestrator | 2026-02-05 01:07:30.457749 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-05 01:07:30.457754 | orchestrator | Thursday 05 February 2026 01:05:21 +0000 (0:00:00.455) 0:00:40.311 ***** 2026-02-05 01:07:30.457760 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.457765 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.457770 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.457776 | orchestrator | 2026-02-05 01:07:30.457781 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:07:30.457786 | orchestrator | Thursday 05 February 2026 01:05:21 +0000 (0:00:00.395) 0:00:40.706 ***** 2026-02-05 01:07:30.457791 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:30.457796 | orchestrator | 2026-02-05 01:07:30.457801 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-05 01:07:30.457806 | orchestrator | Thursday 05 February 2026 01:05:21 +0000 (0:00:00.471) 0:00:41.178 ***** 2026-02-05 01:07:30.457815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.457830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.457838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.457848 | orchestrator | 2026-02-05 01:07:30.457854 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-05 01:07:30.457859 | orchestrator | Thursday 05 February 2026 01:05:25 +0000 (0:00:03.950) 0:00:45.128 ***** 2026-02-05 01:07:30.457869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:07:30.457876 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.457884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:07:30.457893 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.457903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:07:30.457909 | 
orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.457915 | orchestrator | 2026-02-05 01:07:30.457921 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-05 01:07:30.457926 | orchestrator | Thursday 05 February 2026 01:05:28 +0000 (0:00:02.595) 0:00:47.724 ***** 2026-02-05 01:07:30.457932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:07:30.457940 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.457952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:07:30.457959 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.457968 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-05 01:07:30.457974 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.457979 | orchestrator | 2026-02-05 01:07:30.457984 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-05 01:07:30.457990 | orchestrator | Thursday 05 February 2026 01:05:31 +0000 (0:00:03.137) 0:00:50.861 ***** 
2026-02-05 01:07:30.457995 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458004 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458009 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458041 | orchestrator | 2026-02-05 01:07:30.458047 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-05 01:07:30.458052 | orchestrator | Thursday 05 February 2026 01:05:35 +0000 (0:00:03.460) 0:00:54.322 ***** 2026-02-05 01:07:30.458060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.458071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.458079 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.458089 | orchestrator | 2026-02-05 01:07:30.458094 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-05 01:07:30.458099 | orchestrator | Thursday 05 February 2026 01:05:42 +0000 (0:00:07.618) 0:01:01.940 ***** 2026-02-05 01:07:30.458105 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 01:07:30.458110 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:30.458115 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:30.458121 | orchestrator | 2026-02-05 01:07:30.458126 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-05 01:07:30.458131 | orchestrator | Thursday 05 February 2026 01:05:49 +0000 (0:00:06.322) 0:01:08.263 ***** 2026-02-05 01:07:30.458137 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458142 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458147 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458152 | orchestrator | 2026-02-05 01:07:30.458158 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-05 01:07:30.458163 | orchestrator | Thursday 05 February 2026 01:05:52 +0000 (0:00:03.234) 0:01:11.497 ***** 2026-02-05 01:07:30.458168 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458173 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458179 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458184 | orchestrator | 2026-02-05 01:07:30.458189 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-05 01:07:30.458222 | orchestrator | Thursday 05 February 2026 01:05:56 +0000 (0:00:03.723) 0:01:15.220 ***** 2026-02-05 01:07:30.458228 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458236 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458242 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458248 | orchestrator | 2026-02-05 01:07:30.458253 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-05 01:07:30.458259 | orchestrator | Thursday 05 February 2026 01:06:01 +0000 (0:00:05.498) 0:01:20.718 ***** 2026-02-05 01:07:30.458264 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 01:07:30.458269 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458274 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458280 | orchestrator | 2026-02-05 01:07:30.458285 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-05 01:07:30.458291 | orchestrator | Thursday 05 February 2026 01:06:04 +0000 (0:00:03.097) 0:01:23.816 ***** 2026-02-05 01:07:30.458295 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458298 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458302 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458307 | orchestrator | 2026-02-05 01:07:30.458311 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-05 01:07:30.458314 | orchestrator | Thursday 05 February 2026 01:06:04 +0000 (0:00:00.264) 0:01:24.081 ***** 2026-02-05 01:07:30.458317 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-05 01:07:30.458321 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458325 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-05 01:07:30.458331 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:30.458347 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-05 01:07:30.458353 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458358 | orchestrator | 2026-02-05 01:07:30.458363 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-05 01:07:30.458368 | orchestrator | Thursday 05 February 2026 01:06:08 +0000 (0:00:03.780) 0:01:27.861 ***** 2026-02-05 01:07:30.458373 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458379 | orchestrator | changed: [testbed-node-2] 
2026-02-05 01:07:30.458384 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:30.458389 | orchestrator | 2026-02-05 01:07:30.458394 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-05 01:07:30.458399 | orchestrator | Thursday 05 February 2026 01:06:13 +0000 (0:00:05.163) 0:01:33.024 ***** 2026-02-05 01:07:30.458408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.458419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.458429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-05 01:07:30.458433 | orchestrator | 2026-02-05 01:07:30.458436 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-05 01:07:30.458440 | orchestrator | Thursday 05 February 2026 01:06:17 +0000 (0:00:03.945) 0:01:36.970 ***** 2026-02-05 01:07:30.458443 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:30.458446 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:30.458449 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 01:07:30.458452 | orchestrator | 2026-02-05 01:07:30.458455 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-05 01:07:30.458459 | orchestrator | Thursday 05 February 2026 01:06:18 +0000 (0:00:00.250) 0:01:37.220 ***** 2026-02-05 01:07:30.458462 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458465 | orchestrator | 2026-02-05 01:07:30.458468 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-02-05 01:07:30.458471 | orchestrator | Thursday 05 February 2026 01:06:20 +0000 (0:00:02.616) 0:01:39.837 ***** 2026-02-05 01:07:30.458474 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458478 | orchestrator | 2026-02-05 01:07:30.458481 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-05 01:07:30.458484 | orchestrator | Thursday 05 February 2026 01:06:22 +0000 (0:00:02.152) 0:01:41.990 ***** 2026-02-05 01:07:30.458487 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458490 | orchestrator | 2026-02-05 01:07:30.458493 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-05 01:07:30.458498 | orchestrator | Thursday 05 February 2026 01:06:25 +0000 (0:00:02.234) 0:01:44.224 ***** 2026-02-05 01:07:30.458502 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458505 | orchestrator | 2026-02-05 01:07:30.458508 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-05 01:07:30.458511 | orchestrator | Thursday 05 February 2026 01:06:55 +0000 (0:00:30.187) 0:02:14.411 ***** 2026-02-05 01:07:30.458514 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458517 | orchestrator | 2026-02-05 01:07:30.458521 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-05 01:07:30.458524 | 
orchestrator | Thursday 05 February 2026 01:06:57 +0000 (0:00:01.974) 0:02:16.386 ***** 2026-02-05 01:07:30.458527 | orchestrator | 2026-02-05 01:07:30.458532 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-05 01:07:30.458535 | orchestrator | Thursday 05 February 2026 01:06:57 +0000 (0:00:00.172) 0:02:16.559 ***** 2026-02-05 01:07:30.458539 | orchestrator | 2026-02-05 01:07:30.458542 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-05 01:07:30.458545 | orchestrator | Thursday 05 February 2026 01:06:57 +0000 (0:00:00.059) 0:02:16.619 ***** 2026-02-05 01:07:30.458548 | orchestrator | 2026-02-05 01:07:30.458551 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-05 01:07:30.458554 | orchestrator | Thursday 05 February 2026 01:06:57 +0000 (0:00:00.057) 0:02:16.677 ***** 2026-02-05 01:07:30.458557 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:30.458561 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:30.458564 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:30.458567 | orchestrator | 2026-02-05 01:07:30.458570 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:07:30.458574 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-05 01:07:30.458577 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:07:30.458581 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-05 01:07:30.458584 | orchestrator | 2026-02-05 01:07:30.458587 | orchestrator | 2026-02-05 01:07:30.458590 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:07:30.458593 | orchestrator | Thursday 05 
February 2026 01:07:29 +0000 (0:00:31.929) 0:02:48.606 ***** 2026-02-05 01:07:30.458597 | orchestrator | =============================================================================== 2026-02-05 01:07:30.458603 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.93s 2026-02-05 01:07:30.458608 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.19s 2026-02-05 01:07:30.458613 | orchestrator | glance : Copying over config.json files for services -------------------- 7.62s 2026-02-05 01:07:30.458618 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.27s 2026-02-05 01:07:30.458623 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.32s 2026-02-05 01:07:30.458629 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.50s 2026-02-05 01:07:30.458634 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.16s 2026-02-05 01:07:30.458639 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.03s 2026-02-05 01:07:30.458644 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.46s 2026-02-05 01:07:30.458649 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.39s 2026-02-05 01:07:30.458655 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.95s 2026-02-05 01:07:30.458660 | orchestrator | glance : Check glance containers ---------------------------------------- 3.95s 2026-02-05 01:07:30.458669 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.78s 2026-02-05 01:07:30.458677 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.72s 2026-02-05 01:07:30.458682 | orchestrator | service-ks-register : glance | 
Creating services ------------------------ 3.54s 2026-02-05 01:07:30.458687 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.46s 2026-02-05 01:07:30.458693 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.35s 2026-02-05 01:07:30.458698 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.33s 2026-02-05 01:07:30.458703 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.23s 2026-02-05 01:07:30.458708 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.23s 2026-02-05 01:07:30.458714 | orchestrator | 2026-02-05 01:07:30 | INFO  | Task c87cf68a-67fa-4ec7-b069-a87f5714eb22 is in state SUCCESS 2026-02-05 01:07:30.458719 | orchestrator | 2026-02-05 01:07:30 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:30.459965 | orchestrator | 2026-02-05 01:07:30 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:07:30.462453 | orchestrator | 2026-02-05 01:07:30 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:30.462482 | orchestrator | 2026-02-05 01:07:30 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:33.503812 | orchestrator | 2026-02-05 01:07:33 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:33.506559 | orchestrator | 2026-02-05 01:07:33 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:33.508071 | orchestrator | 2026-02-05 01:07:33 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:07:33.509924 | orchestrator | 2026-02-05 01:07:33 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:33.510124 | orchestrator | 2026-02-05 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-02-05 
01:07:36.550161 | orchestrator | 2026-02-05 01:07:36 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:36.552198 | orchestrator | 2026-02-05 01:07:36 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:36.554441 | orchestrator | 2026-02-05 01:07:36 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:07:36.555933 | orchestrator | 2026-02-05 01:07:36 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:36.556126 | orchestrator | 2026-02-05 01:07:36 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:39.582261 | orchestrator | 2026-02-05 01:07:39 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:39.583305 | orchestrator | 2026-02-05 01:07:39 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:39.583474 | orchestrator | 2026-02-05 01:07:39 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:07:39.584814 | orchestrator | 2026-02-05 01:07:39 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:39.584844 | orchestrator | 2026-02-05 01:07:39 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:42.631427 | orchestrator | 2026-02-05 01:07:42 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:42.634805 | orchestrator | 2026-02-05 01:07:42 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:42.638176 | orchestrator | 2026-02-05 01:07:42 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state STARTED 2026-02-05 01:07:42.642001 | orchestrator | 2026-02-05 01:07:42 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:42.642080 | orchestrator | 2026-02-05 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:45.685126 | orchestrator 
| 2026-02-05 01:07:45 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:45.686789 | orchestrator | 2026-02-05 01:07:45 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:45.691038 | orchestrator | 2026-02-05 01:07:45 | INFO  | Task beb49eb2-dfb6-4b18-b4be-ccc6d72ac5fc is in state SUCCESS 2026-02-05 01:07:45.692709 | orchestrator | 2026-02-05 01:07:45.692756 | orchestrator | 2026-02-05 01:07:45.692763 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:07:45.692769 | orchestrator | 2026-02-05 01:07:45.692775 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:07:45.692780 | orchestrator | Thursday 05 February 2026 01:05:00 +0000 (0:00:00.232) 0:00:00.232 ***** 2026-02-05 01:07:45.692786 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:07:45.692791 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:07:45.692797 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:07:45.692803 | orchestrator | 2026-02-05 01:07:45.692818 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:07:45.692824 | orchestrator | Thursday 05 February 2026 01:05:01 +0000 (0:00:00.275) 0:00:00.508 ***** 2026-02-05 01:07:45.692829 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-05 01:07:45.692835 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-05 01:07:45.692841 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-05 01:07:45.692846 | orchestrator | 2026-02-05 01:07:45.692852 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-05 01:07:45.692857 | orchestrator | 2026-02-05 01:07:45.692862 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:45.692868 | 
orchestrator | Thursday 05 February 2026 01:05:01 +0000 (0:00:00.482) 0:00:00.990 ***** 2026-02-05 01:07:45.692874 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:45.692880 | orchestrator | 2026-02-05 01:07:45.692885 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-05 01:07:45.692891 | orchestrator | Thursday 05 February 2026 01:05:02 +0000 (0:00:00.550) 0:00:01.540 ***** 2026-02-05 01:07:45.692896 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-05 01:07:45.692902 | orchestrator | 2026-02-05 01:07:45.692907 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-05 01:07:45.692913 | orchestrator | Thursday 05 February 2026 01:05:05 +0000 (0:00:03.601) 0:00:05.142 ***** 2026-02-05 01:07:45.692918 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-05 01:07:45.692924 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-05 01:07:45.692929 | orchestrator | 2026-02-05 01:07:45.692934 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-05 01:07:45.692939 | orchestrator | Thursday 05 February 2026 01:05:13 +0000 (0:00:07.406) 0:00:12.549 ***** 2026-02-05 01:07:45.692944 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:07:45.692949 | orchestrator | 2026-02-05 01:07:45.692954 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-05 01:07:45.692960 | orchestrator | Thursday 05 February 2026 01:05:16 +0000 (0:00:03.062) 0:00:15.611 ***** 2026-02-05 01:07:45.692965 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-05 01:07:45.692982 | 
orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:07:45.692987 | orchestrator | 2026-02-05 01:07:45.692993 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-05 01:07:45.692998 | orchestrator | Thursday 05 February 2026 01:05:20 +0000 (0:00:04.097) 0:00:19.708 ***** 2026-02-05 01:07:45.693004 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:07:45.693009 | orchestrator | 2026-02-05 01:07:45.693014 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-05 01:07:45.693019 | orchestrator | Thursday 05 February 2026 01:05:24 +0000 (0:00:03.718) 0:00:23.427 ***** 2026-02-05 01:07:45.693025 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-05 01:07:45.693030 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-05 01:07:45.693037 | orchestrator | 2026-02-05 01:07:45.693043 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-05 01:07:45.693048 | orchestrator | Thursday 05 February 2026 01:05:31 +0000 (0:00:07.498) 0:00:30.926 ***** 2026-02-05 01:07:45.693055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.693079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.693086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 
01:07:45.693092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693162 | orchestrator | 2026-02-05 01:07:45.693167 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:45.693173 | orchestrator | Thursday 05 February 2026 01:05:34 +0000 (0:00:02.712) 0:00:33.639 ***** 2026-02-05 01:07:45.693178 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.693184 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.693189 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.693194 | orchestrator | 2026-02-05 01:07:45.693200 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:45.693205 | orchestrator | Thursday 05 February 2026 01:05:34 +0000 (0:00:00.280) 0:00:33.919 ***** 2026-02-05 01:07:45.693211 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:45.693216 | orchestrator | 2026-02-05 01:07:45.693221 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-05 01:07:45.693227 | orchestrator | Thursday 05 February 2026 01:05:35 +0000 (0:00:00.641) 0:00:34.561 ***** 2026-02-05 01:07:45.693236 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-05 01:07:45.693242 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-05 01:07:45.693247 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-05 01:07:45.693253 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-05 01:07:45.693258 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 
2026-02-05 01:07:45.693263 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-05 01:07:45.693269 | orchestrator | 2026-02-05 01:07:45.693276 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-05 01:07:45.693282 | orchestrator | Thursday 05 February 2026 01:05:37 +0000 (0:00:02.639) 0:00:37.201 ***** 2026-02-05 01:07:45.693288 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:45.693299 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:45.693305 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:45.693317 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:45.693339 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:45.693346 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-05 01:07:45.693357 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:45.693363 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:45.693369 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:45.693383 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:45.693392 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:45.693401 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-05 01:07:45.693407 | orchestrator | 2026-02-05 01:07:45.693413 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-05 01:07:45.693418 | orchestrator | Thursday 05 February 2026 01:05:43 +0000 (0:00:05.700) 0:00:42.901 ***** 2026-02-05 01:07:45.693424 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:45.693430 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:45.693435 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-05 01:07:45.693441 | orchestrator | 2026-02-05 01:07:45.693447 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-05 01:07:45.693452 | orchestrator | Thursday 05 February 2026 01:05:45 +0000 (0:00:02.217) 0:00:45.119 ***** 2026-02-05 01:07:45.693458 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-05 01:07:45.693463 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-05 01:07:45.693468 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-05 01:07:45.693474 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 01:07:45.693480 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 01:07:45.693485 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-05 01:07:45.693491 | orchestrator | 2026-02-05 01:07:45.693496 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 
2026-02-05 01:07:45.693502 | orchestrator | Thursday 05 February 2026 01:05:48 +0000 (0:00:02.931) 0:00:48.050 ***** 2026-02-05 01:07:45.693507 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-05 01:07:45.693513 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-05 01:07:45.693519 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-05 01:07:45.693525 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-05 01:07:45.693531 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-05 01:07:45.693536 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-05 01:07:45.693542 | orchestrator | 2026-02-05 01:07:45.693547 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-05 01:07:45.693553 | orchestrator | Thursday 05 February 2026 01:05:49 +0000 (0:00:00.840) 0:00:48.891 ***** 2026-02-05 01:07:45.693559 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.693564 | orchestrator | 2026-02-05 01:07:45.693570 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-05 01:07:45.693576 | orchestrator | Thursday 05 February 2026 01:05:49 +0000 (0:00:00.108) 0:00:49.000 ***** 2026-02-05 01:07:45.693584 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.693590 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.693595 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.693601 | orchestrator | 2026-02-05 01:07:45.693606 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:45.693612 | orchestrator | Thursday 05 February 2026 01:05:50 +0000 (0:00:00.308) 0:00:49.309 ***** 2026-02-05 01:07:45.693617 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:07:45.693623 | orchestrator | 2026-02-05 
01:07:45.693629 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-05 01:07:45.693637 | orchestrator | Thursday 05 February 2026 01:05:50 +0000 (0:00:00.771) 0:00:50.081 ***** 2026-02-05 01:07:45.693646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.693652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.693657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.693663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693723 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.693741 | orchestrator | 2026-02-05 01:07:45.693746 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-05 01:07:45.693752 | orchestrator | Thursday 05 February 2026 01:05:54 +0000 (0:00:03.816) 0:00:53.897 ***** 2026-02-05 01:07:45.693757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.693792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693808 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693814 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.693825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.693831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693848 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.693853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.693862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693885 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.693890 | orchestrator | 2026-02-05 01:07:45.693896 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-05 01:07:45.693901 | orchestrator | Thursday 05 February 2026 01:05:55 +0000 (0:00:01.150) 0:00:55.048 ***** 2026-02-05 01:07:45.693907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.693916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693935 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.693942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.693947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693952 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693966 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.693971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.693981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.693993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694001 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.694006 | orchestrator | 2026-02-05 01:07:45.694012 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-05 01:07:45.694047 | orchestrator | Thursday 05 February 2026 01:05:57 +0000 (0:00:01.447) 0:00:56.495 ***** 2026-02-05 01:07:45.694053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694143 | 
orchestrator | 2026-02-05 01:07:45.694148 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-05 01:07:45.694154 | orchestrator | Thursday 05 February 2026 01:06:02 +0000 (0:00:05.147) 0:01:01.643 ***** 2026-02-05 01:07:45.694159 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-05 01:07:45.694165 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-05 01:07:45.694170 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-05 01:07:45.694176 | orchestrator | 2026-02-05 01:07:45.694181 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-05 01:07:45.694186 | orchestrator | Thursday 05 February 2026 01:06:04 +0000 (0:00:01.848) 0:01:03.492 ***** 2026-02-05 01:07:45.694194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 
'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694277 | orchestrator | 2026-02-05 01:07:45.694283 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-05 01:07:45.694288 | orchestrator | Thursday 05 February 2026 01:06:15 +0000 (0:00:11.777) 0:01:15.269 ***** 2026-02-05 01:07:45.694294 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694299 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:45.694304 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:45.694309 | orchestrator | 2026-02-05 01:07:45.694315 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-05 01:07:45.694322 | orchestrator | Thursday 05 February 2026 01:06:17 +0000 (0:00:01.869) 0:01:17.139 ***** 2026-02-05 01:07:45.694377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.694384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694397 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.694402 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-05 01:07:45.694412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694432 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.694436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-02-05 01:07:45.694439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-05 01:07:45.694456 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.694459 | orchestrator | 2026-02-05 01:07:45.694463 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-05 01:07:45.694468 | orchestrator | Thursday 05 February 2026 01:06:18 +0000 (0:00:00.528) 0:01:17.667 ***** 2026-02-05 01:07:45.694473 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.694478 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.694484 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.694489 | orchestrator | 2026-02-05 01:07:45.694494 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-05 01:07:45.694500 | orchestrator | Thursday 05 February 2026 01:06:18 +0000 (0:00:00.275) 0:01:17.942 ***** 2026-02-05 01:07:45.694505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-05 01:07:45.694525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-05 01:07:45.694591 | orchestrator | 2026-02-05 01:07:45.694596 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-05 01:07:45.694601 | orchestrator | Thursday 05 February 2026 01:06:22 +0000 (0:00:03.452) 0:01:21.395 ***** 2026-02-05 01:07:45.694606 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.694611 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:07:45.694614 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:07:45.694617 | orchestrator | 2026-02-05 01:07:45.694620 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-05 01:07:45.694623 | orchestrator | Thursday 05 February 2026 01:06:22 +0000 (0:00:00.477) 0:01:21.873 ***** 2026-02-05 01:07:45.694627 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694630 | orchestrator | 2026-02-05 01:07:45.694633 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-05 01:07:45.694636 | orchestrator | Thursday 05 February 2026 01:06:24 +0000 (0:00:01.990) 0:01:23.864 ***** 2026-02-05 01:07:45.694639 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694642 | orchestrator | 2026-02-05 01:07:45.694646 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-05 01:07:45.694649 | orchestrator | Thursday 05 February 2026 01:06:26 +0000 (0:00:01.923) 0:01:25.787 ***** 2026-02-05 01:07:45.694652 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694657 | orchestrator | 2026-02-05 01:07:45.694662 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-05 01:07:45.694667 | orchestrator | Thursday 05 February 2026 01:06:48 +0000 (0:00:21.552) 0:01:47.340 ***** 2026-02-05 01:07:45.694672 | orchestrator | 2026-02-05 01:07:45.694677 | orchestrator 
| TASK [cinder : Flush handlers] ************************************************* 2026-02-05 01:07:45.694683 | orchestrator | Thursday 05 February 2026 01:06:48 +0000 (0:00:00.065) 0:01:47.405 ***** 2026-02-05 01:07:45.694688 | orchestrator | 2026-02-05 01:07:45.694693 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-05 01:07:45.694699 | orchestrator | Thursday 05 February 2026 01:06:48 +0000 (0:00:00.066) 0:01:47.471 ***** 2026-02-05 01:07:45.694707 | orchestrator | 2026-02-05 01:07:45.694713 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-05 01:07:45.694718 | orchestrator | Thursday 05 February 2026 01:06:48 +0000 (0:00:00.072) 0:01:47.543 ***** 2026-02-05 01:07:45.694723 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694728 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:45.694733 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:45.694738 | orchestrator | 2026-02-05 01:07:45.694743 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-05 01:07:45.694747 | orchestrator | Thursday 05 February 2026 01:07:05 +0000 (0:00:16.920) 0:02:04.464 ***** 2026-02-05 01:07:45.694752 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694757 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:45.694761 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:45.694766 | orchestrator | 2026-02-05 01:07:45.694771 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-05 01:07:45.694776 | orchestrator | Thursday 05 February 2026 01:07:10 +0000 (0:00:05.281) 0:02:09.745 ***** 2026-02-05 01:07:45.694780 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694785 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:45.694790 | orchestrator | changed: [testbed-node-2] 2026-02-05 
01:07:45.694795 | orchestrator | 2026-02-05 01:07:45.694800 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-05 01:07:45.694806 | orchestrator | Thursday 05 February 2026 01:07:31 +0000 (0:00:20.812) 0:02:30.557 ***** 2026-02-05 01:07:45.694811 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:07:45.694816 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:07:45.694821 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:07:45.694826 | orchestrator | 2026-02-05 01:07:45.694831 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-05 01:07:45.694840 | orchestrator | Thursday 05 February 2026 01:07:41 +0000 (0:00:10.650) 0:02:41.208 ***** 2026-02-05 01:07:45.694846 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:07:45.694850 | orchestrator | 2026-02-05 01:07:45.694853 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:07:45.694857 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-05 01:07:45.694864 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:07:45.694869 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:07:45.694874 | orchestrator | 2026-02-05 01:07:45.694879 | orchestrator | 2026-02-05 01:07:45.694884 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:07:45.694889 | orchestrator | Thursday 05 February 2026 01:07:42 +0000 (0:00:00.308) 0:02:41.516 ***** 2026-02-05 01:07:45.694894 | orchestrator | =============================================================================== 2026-02-05 01:07:45.694900 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.55s 
2026-02-05 01:07:45.694905 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 20.81s 2026-02-05 01:07:45.694910 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 16.92s 2026-02-05 01:07:45.694915 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.78s 2026-02-05 01:07:45.694921 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.65s 2026-02-05 01:07:45.694925 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.50s 2026-02-05 01:07:45.694931 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.41s 2026-02-05 01:07:45.694936 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.70s 2026-02-05 01:07:45.694944 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.28s 2026-02-05 01:07:45.694949 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.15s 2026-02-05 01:07:45.694954 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.10s 2026-02-05 01:07:45.694959 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.82s 2026-02-05 01:07:45.694964 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.72s 2026-02-05 01:07:45.694970 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.60s 2026-02-05 01:07:45.694975 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.45s 2026-02-05 01:07:45.694980 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.06s 2026-02-05 01:07:45.694985 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.93s 2026-02-05 
01:07:45.694990 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.71s 2026-02-05 01:07:45.694995 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.64s 2026-02-05 01:07:45.695001 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.22s 2026-02-05 01:07:45.695006 | orchestrator | 2026-02-05 01:07:45 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:45.695012 | orchestrator | 2026-02-05 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:48.739044 | orchestrator | 2026-02-05 01:07:48 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:48.740614 | orchestrator | 2026-02-05 01:07:48 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:48.741860 | orchestrator | 2026-02-05 01:07:48 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:48.741976 | orchestrator | 2026-02-05 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:51.785241 | orchestrator | 2026-02-05 01:07:51 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:51.786164 | orchestrator | 2026-02-05 01:07:51 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:51.787903 | orchestrator | 2026-02-05 01:07:51 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:51.788317 | orchestrator | 2026-02-05 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:54.828587 | orchestrator | 2026-02-05 01:07:54 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:54.830421 | orchestrator | 2026-02-05 01:07:54 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:54.832257 | orchestrator | 2026-02-05 01:07:54 | INFO  | Task 
31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:54.832295 | orchestrator | 2026-02-05 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:07:57.880911 | orchestrator | 2026-02-05 01:07:57 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:07:57.883831 | orchestrator | 2026-02-05 01:07:57 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:07:57.885763 | orchestrator | 2026-02-05 01:07:57 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:07:57.885830 | orchestrator | 2026-02-05 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:08:00.924537 | orchestrator | 2026-02-05 01:08:00 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state STARTED 2026-02-05 01:08:00.925663 | orchestrator | 2026-02-05 01:08:00 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:08:00.926771 | orchestrator | 2026-02-05 01:08:00 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state STARTED 2026-02-05 01:08:00.926802 | orchestrator | 2026-02-05 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:10:04.082055 | orchestrator | 2026-02-05 01:10:04.082161 | orchestrator | 2026-02-05 01:10:04.082169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:10:04.082174 | orchestrator | 2026-02-05 01:10:04.082179 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:10:04.082183 | orchestrator | Thursday 05 February 2026 01:07:34 +0000 (0:00:00.250) 0:00:00.250 ***** 2026-02-05 01:10:04.082188 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:10:04.082193 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:10:04.082286 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:10:04.082292 | orchestrator | 2026-02-05 01:10:04.082296 | orchestrator | TASK 
[Group hosts based on enabled services] *********************************** 2026-02-05 01:10:04.082301 | orchestrator | Thursday 05 February 2026 01:07:34 +0000 (0:00:00.300) 0:00:00.550 ***** 2026-02-05 01:10:04.082305 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-05 01:10:04.082310 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-05 01:10:04.082314 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-05 01:10:04.082318 | orchestrator | 2026-02-05 01:10:04.082322 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-05 01:10:04.082326 | orchestrator | 2026-02-05 01:10:04.082330 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-05 01:10:04.082334 | orchestrator | Thursday 05 February 2026 01:07:34 +0000 (0:00:00.421) 0:00:00.972 ***** 2026-02-05 01:10:04.082339 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:10:04.082344 | orchestrator | 2026-02-05 01:10:04.082348 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-05 01:10:04.082352 | orchestrator | Thursday 05 February 2026 01:07:35 +0000 (0:00:00.500) 0:00:01.472 ***** 2026-02-05 01:10:04.082359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082393 | orchestrator | 2026-02-05 01:10:04.082478 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-05 01:10:04.082483 | orchestrator | Thursday 05 February 2026 01:07:36 +0000 (0:00:00.739) 0:00:02.212 ***** 2026-02-05 01:10:04.082488 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-05 01:10:04.082493 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-05 
01:10:04.082497 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:10:04.082501 | orchestrator | 2026-02-05 01:10:04.082516 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-05 01:10:04.082520 | orchestrator | Thursday 05 February 2026 01:07:36 +0000 (0:00:00.741) 0:00:02.954 ***** 2026-02-05 01:10:04.082524 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:10:04.082528 | orchestrator | 2026-02-05 01:10:04.082532 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-05 01:10:04.082535 | orchestrator | Thursday 05 February 2026 01:07:37 +0000 (0:00:00.564) 0:00:03.518 ***** 2026-02-05 01:10:04.082553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082566 | orchestrator | 2026-02-05 01:10:04.082570 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-05 01:10:04.082574 | orchestrator | Thursday 05 February 2026 01:07:38 +0000 (0:00:01.269) 0:00:04.788 ***** 2026-02-05 01:10:04.082578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 01:10:04.082605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 01:10:04.082610 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.082615 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.082627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 01:10:04.082632 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.082637 | orchestrator | 2026-02-05 01:10:04.082642 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-05 01:10:04.082646 | orchestrator | Thursday 05 February 2026 01:07:38 +0000 (0:00:00.306) 0:00:05.095 ***** 2026-02-05 01:10:04.082651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 01:10:04.082656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 01:10:04.082662 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.082668 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.082675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-05 01:10:04.082685 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.082692 | orchestrator | 2026-02-05 01:10:04.082707 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-05 01:10:04.082714 | orchestrator | Thursday 05 February 2026 01:07:39 +0000 (0:00:00.772) 0:00:05.867 ***** 2026-02-05 01:10:04.082720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082743 | orchestrator | 2026-02-05 01:10:04.082748 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-05 01:10:04.082755 | orchestrator | Thursday 05 February 2026 01:07:40 +0000 (0:00:01.242) 0:00:07.110 ***** 2026-02-05 01:10:04.082776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.082807 | orchestrator | 2026-02-05 01:10:04.082813 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-05 01:10:04.082820 | orchestrator | Thursday 05 February 2026 01:07:42 +0000 (0:00:01.246) 0:00:08.357 ***** 2026-02-05 01:10:04.082826 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.082833 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.082840 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.082846 | orchestrator | 2026-02-05 01:10:04.082853 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-05 01:10:04.082859 | orchestrator | Thursday 05 February 2026 01:07:42 +0000 (0:00:00.485) 0:00:08.842 ***** 2026-02-05 01:10:04.082864 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-05 01:10:04.082871 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-05 01:10:04.082877 
| orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-05 01:10:04.082883 | orchestrator | 2026-02-05 01:10:04.082889 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-05 01:10:04.082895 | orchestrator | Thursday 05 February 2026 01:07:43 +0000 (0:00:01.140) 0:00:09.983 ***** 2026-02-05 01:10:04.082904 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-05 01:10:04.082911 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-05 01:10:04.082917 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-05 01:10:04.082923 | orchestrator | 2026-02-05 01:10:04.082927 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-05 01:10:04.082933 | orchestrator | Thursday 05 February 2026 01:07:45 +0000 (0:00:01.256) 0:00:11.239 ***** 2026-02-05 01:10:04.082944 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:10:04.082959 | orchestrator | 2026-02-05 01:10:04.082965 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-05 01:10:04.082971 | orchestrator | Thursday 05 February 2026 01:07:45 +0000 (0:00:00.795) 0:00:12.035 ***** 2026-02-05 01:10:04.082977 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-05 01:10:04.082983 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-05 01:10:04.082989 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:10:04.082996 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:10:04.083001 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:10:04.083008 | orchestrator | 2026-02-05 
01:10:04.083014 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-05 01:10:04.083020 | orchestrator | Thursday 05 February 2026 01:07:46 +0000 (0:00:00.653) 0:00:12.689 ***** 2026-02-05 01:10:04.083026 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.083046 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.083052 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.083057 | orchestrator | 2026-02-05 01:10:04.083076 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-05 01:10:04.083088 | orchestrator | Thursday 05 February 2026 01:07:47 +0000 (0:00:00.482) 0:00:13.171 ***** 2026-02-05 01:10:04.083096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091938, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6264322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091938, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6264322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091938, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6264322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092004, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6422143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092004, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6422143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1092004, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6422143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091954, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.630078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091954, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.630078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091954, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.630078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092010, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6452143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092010, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6452143, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1092010, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6452143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091975, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.634214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091975, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 
'ctime': 1770250713.634214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091975, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.634214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1091992, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.639407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1091992, 'dev': 81, 'nlink': 1, 
'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.639407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1091992, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.639407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091936, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6241999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091936, 'dev': 81, 'nlink': 1, 'atime': 
1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6241999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091936, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6241999, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091945, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6272137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091945, 'dev': 81, 'nlink': 1, 'atime': 
1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6272137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091945, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6272137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-05 01:10:04 | INFO  | Task fa0d2cc5-b623-48df-aeb0-56482851a14f is in state SUCCESS
2026-02-05 01:10:04.083320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091959, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.630214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091959, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.630214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091959, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.630214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091980, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6360652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091980, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6360652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091980, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6360652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092002, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.641085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092002, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.641085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1092002, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.641085, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091948, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.628214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091948, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.628214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091948, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.628214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091990, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.638214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091990, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.638214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091990, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.638214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091977, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.635214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083818 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091977, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.635214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091977, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.635214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1091971, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.633214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083853 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1091971, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.633214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1091971, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.633214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091968, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6322417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-05 01:10:04.083872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091968, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6322417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091968, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6322417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091985, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6372142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091985, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6372142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091985, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6372142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091962, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.631002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091962, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.631002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091962, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.631002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092000, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6402142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.083998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092000, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6402142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1092000, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.6402142, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1093045, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8792214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1093045, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8792214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1093045, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.8792214, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092046, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.672402, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092046, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.672402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1092046, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 'ctime': 1770250713.672402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-05 01:10:04.084061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1092029, 'dev': 81, 'nlink': 1, 'atime': 1770249744.0, 'mtime': 1770249744.0, 
'ctime': 1770250713.6502144, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
[loop results condensed: identical on testbed-node-0, testbed-node-1 and testbed-node-2; every item is a regular file with mode '0644', uid/gid 0 (root:root), dev 81, nlink 1, atime/mtime 1770249744.0; per-item path, size, inode and ctime follow]
2026-02-05 01:10:04.084067 | orchestrator | changed: (item=infrastructure/database.json => /operations/grafana/dashboards/infrastructure/database.json, size=30898, inode=1092029, ctime=1770250713.6502144)
2026-02-05 01:10:04.084080 | orchestrator | changed: (item=infrastructure/node-rsrc-use.json => /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, size=15725, inode=1092060, ctime=1770250713.6752152)
2026-02-05 01:10:04.084112 | orchestrator | changed: (item=infrastructure/alertmanager-overview.json => /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, size=9645, inode=1092019, ctime=1770250713.6482143)
2026-02-05 01:10:04.084134 | orchestrator | changed: (item=infrastructure/opensearch.json => /operations/grafana/dashboards/infrastructure/opensearch.json, size=65458, inode=1093011, ctime=1770250713.8721702)
2026-02-05 01:10:04.084159 | orchestrator | changed: (item=infrastructure/node_exporter_full.json => /operations/grafana/dashboards/infrastructure/node_exporter_full.json, size=682774, inode=1092063, ctime=1770250713.8685284)
2026-02-05 01:10:04.084179 | orchestrator | changed: (item=infrastructure/prometheus-remote-write.json => /operations/grafana/dashboards/infrastructure/prometheus-remote-write.json, size=22317, inode=1093018, ctime=1770250713.8722212)
2026-02-05 01:10:04.084263 | orchestrator | changed: (item=infrastructure/redfish.json => /operations/grafana/dashboards/infrastructure/redfish.json, size=38087, inode=1093039, ctime=1770250713.8778305)
2026-02-05 01:10:04.084283 | orchestrator | changed: (item=infrastructure/nodes.json => /operations/grafana/dashboards/infrastructure/nodes.json, size=21109, inode=1093008, ctime=1770250713.8708973)
2026-02-05 01:10:04.084319 | orchestrator | changed: (item=infrastructure/memcached.json => /operations/grafana/dashboards/infrastructure/memcached.json, size=24243, inode=1092057, ctime=1770250713.6735861)
2026-02-05 01:10:04.084340 | orchestrator | changed: (item=infrastructure/fluentd.json => /operations/grafana/dashboards/infrastructure/fluentd.json, size=82960, inode=1092045, ctime=1770250713.660215)
2026-02-05 01:10:04.084376 | orchestrator | changed: (item=infrastructure/libvirt.json => /operations/grafana/dashboards/infrastructure/libvirt.json, size=29672, inode=1092053, ctime=1770250713.6733434)
2026-02-05 01:10:04.084397 | orchestrator | changed: (item=infrastructure/elasticsearch.json => /operations/grafana/dashboards/infrastructure/elasticsearch.json, size=187864, inode=1092030, ctime=1770250713.6562173)
2026-02-05 01:10:04.084429 | orchestrator | changed: (item=infrastructure/node-cluster-rsrc-use.json => /operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json, size=16098, inode=1092058, ctime=1770250713.6742535)
2026-02-05 01:10:04.084454 | orchestrator | changed: (item=infrastructure/rabbitmq.json => /operations/grafana/dashboards/infrastructure/rabbitmq.json, size=222049, inode=1093028, ctime=1770250713.8772469)
2026-02-05 01:10:04.084482 | orchestrator | changed: (item=infrastructure/prometheus_alertmanager.json => /operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json, size=115472, inode=1093023, ctime=1770250713.8742213)
2026-02-05 01:10:04.084507 | orchestrator | changed: (item=infrastructure/blackbox.json => /operations/grafana/dashboards/infrastructure/blackbox.json, size=31128, inode=1092021, ctime=1770250713.649081)
2026-02-05 01:10:04.084530 | orchestrator | changed: (item=infrastructure/cadvisor.json => /operations/grafana/dashboards/infrastructure/cadvisor.json, size=53882, inode=1092025, ctime=1770250713.6501548)
2026-02-05 01:10:04.084558 | orchestrator | changed: (item=infrastructure/node_exporter_side_by_side.json => /operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json, size=70691, inode=1093005, ctime=1770250713.8693511)
2026-02-05 01:10:04.084581 | orchestrator | changed: (item=infrastructure/prometheus.json => /operations/grafana/dashboards/infrastructure/prometheus.json, size=21898, inode=1093021, ctime=1770250713.8730586)
2026-02-05 01:10:04.084761 | orchestrator |
2026-02-05 01:10:04.084768 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-05 01:10:04.084774 | orchestrator | Thursday 05 February 2026 01:08:21 +0000 (0:00:34.125) 0:00:47.297 *****
2026-02-05 01:10:04.084780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http',
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.084787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.084794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-05 01:10:04.084800 | orchestrator | 2026-02-05 01:10:04.084806 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-05 01:10:04.084813 | orchestrator | Thursday 05 February 2026 01:08:22 +0000 (0:00:01.050) 0:00:48.348 ***** 2026-02-05 01:10:04.084824 | orchestrator | changed: [testbed-node-0] 2026-02-05 
01:10:04.084832 | orchestrator | 2026-02-05 01:10:04.084838 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-05 01:10:04.084844 | orchestrator | Thursday 05 February 2026 01:08:24 +0000 (0:00:02.560) 0:00:50.908 ***** 2026-02-05 01:10:04.084850 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:10:04.084856 | orchestrator | 2026-02-05 01:10:04.084863 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 01:10:04.084869 | orchestrator | Thursday 05 February 2026 01:08:26 +0000 (0:00:02.047) 0:00:52.956 ***** 2026-02-05 01:10:04.084876 | orchestrator | 2026-02-05 01:10:04.084882 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 01:10:04.084889 | orchestrator | Thursday 05 February 2026 01:08:26 +0000 (0:00:00.063) 0:00:53.020 ***** 2026-02-05 01:10:04.084895 | orchestrator | 2026-02-05 01:10:04.084901 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-05 01:10:04.084908 | orchestrator | Thursday 05 February 2026 01:08:26 +0000 (0:00:00.062) 0:00:53.082 ***** 2026-02-05 01:10:04.084914 | orchestrator | 2026-02-05 01:10:04.084920 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-05 01:10:04.084926 | orchestrator | Thursday 05 February 2026 01:08:27 +0000 (0:00:00.167) 0:00:53.249 ***** 2026-02-05 01:10:04.084938 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.084945 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.084951 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:10:04.084956 | orchestrator | 2026-02-05 01:10:04.084963 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-05 01:10:04.084969 | orchestrator | Thursday 05 February 2026 01:08:28 +0000 (0:00:01.771) 0:00:55.021 ***** 2026-02-05 
01:10:04.084975 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.084981 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.084987 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-05 01:10:04.084995 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-05 01:10:04.085002 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:10:04.085008 | orchestrator | 2026-02-05 01:10:04.085015 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-05 01:10:04.085031 | orchestrator | Thursday 05 February 2026 01:08:55 +0000 (0:00:26.392) 0:01:21.414 ***** 2026-02-05 01:10:04.085038 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.085045 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:10:04.085051 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:10:04.085058 | orchestrator | 2026-02-05 01:10:04.085064 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-05 01:10:04.085070 | orchestrator | Thursday 05 February 2026 01:09:20 +0000 (0:00:25.015) 0:01:46.429 ***** 2026-02-05 01:10:04.085077 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:10:04.085084 | orchestrator | 2026-02-05 01:10:04.085090 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-05 01:10:04.085097 | orchestrator | Thursday 05 February 2026 01:09:22 +0000 (0:00:02.044) 0:01:48.473 ***** 2026-02-05 01:10:04.085103 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.085110 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:10:04.085116 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:10:04.085123 | orchestrator | 2026-02-05 01:10:04.085129 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 
2026-02-05 01:10:04.085136 | orchestrator | Thursday 05 February 2026 01:09:22 +0000 (0:00:00.451) 0:01:48.925 ***** 2026-02-05 01:10:04.085143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-05 01:10:04.085151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-05 01:10:04.085159 | orchestrator | 2026-02-05 01:10:04.085166 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-05 01:10:04.085173 | orchestrator | Thursday 05 February 2026 01:09:25 +0000 (0:00:02.269) 0:01:51.195 ***** 2026-02-05 01:10:04.085180 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:10:04.085186 | orchestrator | 2026-02-05 01:10:04.085192 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:10:04.085224 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:10:04.085232 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:10:04.085238 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:10:04.085249 | orchestrator | 2026-02-05 01:10:04.085255 | orchestrator | 2026-02-05 01:10:04.085261 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 
01:10:04.085267 | orchestrator | Thursday 05 February 2026 01:09:25 +0000 (0:00:00.254) 0:01:51.449 ***** 2026-02-05 01:10:04.085273 | orchestrator | =============================================================================== 2026-02-05 01:10:04.085283 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.13s 2026-02-05 01:10:04.085289 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.39s 2026-02-05 01:10:04.085295 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.02s 2026-02-05 01:10:04.085302 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.56s 2026-02-05 01:10:04.085308 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.27s 2026-02-05 01:10:04.085314 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.05s 2026-02-05 01:10:04.085321 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.04s 2026-02-05 01:10:04.085327 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.77s 2026-02-05 01:10:04.085333 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.27s 2026-02-05 01:10:04.085339 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s 2026-02-05 01:10:04.085346 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.25s 2026-02-05 01:10:04.085352 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s 2026-02-05 01:10:04.085358 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.14s 2026-02-05 01:10:04.085364 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s 2026-02-05 01:10:04.085370 | 
orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.80s 2026-02-05 01:10:04.085377 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.77s 2026-02-05 01:10:04.085384 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.74s 2026-02-05 01:10:04.085391 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.74s 2026-02-05 01:10:04.085397 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s 2026-02-05 01:10:04.085404 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.56s 2026-02-05 01:10:04.085414 | orchestrator | 2026-02-05 01:10:04 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:10:04.085422 | orchestrator | 2026-02-05 01:10:04 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:10:04.085428 | orchestrator | 2026-02-05 01:10:04 | INFO  | Task 31d12aee-1ea5-4858-b954-38e8c1a16833 is in state SUCCESS 2026-02-05 01:10:04.085435 | orchestrator | 2026-02-05 01:10:04 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:10:07.113694 | orchestrator | 2026-02-05 01:10:07 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:10:07.117340 | orchestrator | 2026-02-05 01:10:07 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:10:07.117427 | orchestrator | 2026-02-05 01:10:07 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:10:10.152350 | orchestrator | 2026-02-05 01:10:10 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:10:10.154162 | orchestrator | 2026-02-05 01:10:10 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:10:10.154236 | orchestrator | 2026-02-05 01:10:10 | INFO  | Wait 1 second(s) until the next 
check 2026-02-05 01:10:13.186729 | orchestrator | 2026-02-05 01:10:13 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:10:13.188077 | orchestrator | 2026-02-05 01:10:13 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:10:13.189864 | orchestrator | 2026-02-05 01:10:13 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:13:21.933051 | orchestrator | 2026-02-05 01:13:21 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state
STARTED 2026-02-05 01:13:21.933634 | orchestrator | 2026-02-05 01:13:21 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:13:21.933671 | orchestrator | 2026-02-05 01:13:21 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:13:24.980844 | orchestrator | 2026-02-05 01:13:24 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:13:24.983824 | orchestrator | 2026-02-05 01:13:24 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:13:24.983871 | orchestrator | 2026-02-05 01:13:24 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:13:28.029475 | orchestrator | 2026-02-05 01:13:28 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:13:28.031081 | orchestrator | 2026-02-05 01:13:28 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:13:28.031124 | orchestrator | 2026-02-05 01:13:28 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:13:31.079872 | orchestrator | 2026-02-05 01:13:31 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state STARTED 2026-02-05 01:13:31.082244 | orchestrator | 2026-02-05 01:13:31 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED 2026-02-05 01:13:31.082290 | orchestrator | 2026-02-05 01:13:31 | INFO  | Wait 1 second(s) until the next check 2026-02-05 01:13:34.121711 | orchestrator | 2026-02-05 01:13:34 | INFO  | Task c029b217-08fe-4ea9-8617-339f4ab7ef59 is in state SUCCESS 2026-02-05 01:13:34.122709 | orchestrator | 2026-02-05 01:13:34.122744 | orchestrator | 2026-02-05 01:13:34.122754 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:13:34.122763 | orchestrator | 2026-02-05 01:13:34.122768 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:13:34.122772 | orchestrator | Thursday 05 February 2026 
01:06:28 +0000 (0:00:00.135) 0:00:00.135 ***** 2026-02-05 01:13:34.122776 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.122781 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:13:34.122785 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:13:34.122789 | orchestrator | 2026-02-05 01:13:34.122793 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:13:34.122817 | orchestrator | Thursday 05 February 2026 01:06:28 +0000 (0:00:00.224) 0:00:00.359 ***** 2026-02-05 01:13:34.122824 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-02-05 01:13:34.122831 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-02-05 01:13:34.122837 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-02-05 01:13:34.122844 | orchestrator | 2026-02-05 01:13:34.122850 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-02-05 01:13:34.122879 | orchestrator | 2026-02-05 01:13:34.122883 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-02-05 01:13:34.122887 | orchestrator | Thursday 05 February 2026 01:06:29 +0000 (0:00:00.484) 0:00:00.844 ***** 2026-02-05 01:13:34.122891 | orchestrator | 2026-02-05 01:13:34.122895 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2026-02-05 01:13:34.122898 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.122902 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:13:34.122906 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:13:34.122910 | orchestrator | 2026-02-05 01:13:34.122914 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:13:34.122918 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:13:34.122923 | orchestrator | testbed-node-1 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:13:34.122927 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:13:34.122930 | orchestrator | 2026-02-05 01:13:34.122934 | orchestrator | 2026-02-05 01:13:34.122938 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:13:34.122942 | orchestrator | Thursday 05 February 2026 01:09:23 +0000 (0:02:54.734) 0:02:55.578 ***** 2026-02-05 01:13:34.122945 | orchestrator | =============================================================================== 2026-02-05 01:13:34.122949 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 174.73s 2026-02-05 01:13:34.122953 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s 2026-02-05 01:13:34.122957 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s 2026-02-05 01:13:34.122960 | orchestrator | 2026-02-05 01:13:34.122964 | orchestrator | 2026-02-05 01:13:34.122980 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-05 01:13:34.122996 | orchestrator | 2026-02-05 01:13:34.123000 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-05 01:13:34.123004 | orchestrator | Thursday 05 February 2026 01:05:20 +0000 (0:00:00.314) 0:00:00.314 ***** 2026-02-05 01:13:34.123008 | orchestrator | changed: [testbed-manager] 2026-02-05 01:13:34.123012 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123016 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:34.123024 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:34.123028 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:34.123031 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:34.123035 | orchestrator | changed: [testbed-node-5] 
2026-02-05 01:13:34.123039 | orchestrator | 2026-02-05 01:13:34.123043 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-05 01:13:34.123046 | orchestrator | Thursday 05 February 2026 01:05:21 +0000 (0:00:00.958) 0:00:01.273 ***** 2026-02-05 01:13:34.123050 | orchestrator | changed: [testbed-manager] 2026-02-05 01:13:34.123054 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123057 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:34.123061 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:34.123065 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:34.123068 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:34.123072 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:34.123080 | orchestrator | 2026-02-05 01:13:34.123084 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-05 01:13:34.123088 | orchestrator | Thursday 05 February 2026 01:05:22 +0000 (0:00:00.645) 0:00:01.918 ***** 2026-02-05 01:13:34.123091 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-05 01:13:34.123102 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-05 01:13:34.123106 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-05 01:13:34.123109 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-05 01:13:34.123113 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-05 01:13:34.123117 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-05 01:13:34.123120 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-05 01:13:34.123124 | orchestrator | 2026-02-05 01:13:34.123145 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-05 01:13:34.123166 | orchestrator | 2026-02-05 01:13:34.123173 | orchestrator | 
TASK [Bootstrap deploy] ******************************************************** 2026-02-05 01:13:34.123180 | orchestrator | Thursday 05 February 2026 01:05:23 +0000 (0:00:00.910) 0:00:02.828 ***** 2026-02-05 01:13:34.123186 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.123193 | orchestrator | 2026-02-05 01:13:34.123199 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-05 01:13:34.123215 | orchestrator | Thursday 05 February 2026 01:05:23 +0000 (0:00:00.721) 0:00:03.550 ***** 2026-02-05 01:13:34.123219 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-05 01:13:34.123223 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-05 01:13:34.123257 | orchestrator | 2026-02-05 01:13:34.123261 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-05 01:13:34.123265 | orchestrator | Thursday 05 February 2026 01:05:28 +0000 (0:00:04.798) 0:00:08.348 ***** 2026-02-05 01:13:34.123269 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 01:13:34.123273 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-05 01:13:34.123277 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123283 | orchestrator | 2026-02-05 01:13:34.123289 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-05 01:13:34.123294 | orchestrator | Thursday 05 February 2026 01:05:32 +0000 (0:00:03.954) 0:00:12.303 ***** 2026-02-05 01:13:34.123300 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123307 | orchestrator | 2026-02-05 01:13:34.123367 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-05 01:13:34.123376 | orchestrator | Thursday 05 February 2026 01:05:33 +0000 (0:00:00.785) 0:00:13.088 ***** 2026-02-05 01:13:34.123381 | orchestrator | changed: 
[testbed-node-0] 2026-02-05 01:13:34.123385 | orchestrator | 2026-02-05 01:13:34.123390 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-05 01:13:34.123395 | orchestrator | Thursday 05 February 2026 01:05:35 +0000 (0:00:01.543) 0:00:14.632 ***** 2026-02-05 01:13:34.123399 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123403 | orchestrator | 2026-02-05 01:13:34.123407 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 01:13:34.123412 | orchestrator | Thursday 05 February 2026 01:05:38 +0000 (0:00:03.901) 0:00:18.533 ***** 2026-02-05 01:13:34.123416 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.123421 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123425 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123429 | orchestrator | 2026-02-05 01:13:34.123434 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-05 01:13:34.123450 | orchestrator | Thursday 05 February 2026 01:05:39 +0000 (0:00:00.865) 0:00:19.399 ***** 2026-02-05 01:13:34.123455 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.123462 | orchestrator | 2026-02-05 01:13:34.123468 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-02-05 01:13:34.123480 | orchestrator | Thursday 05 February 2026 01:06:14 +0000 (0:00:34.955) 0:00:54.354 ***** 2026-02-05 01:13:34.123487 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123493 | orchestrator | 2026-02-05 01:13:34.123500 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-05 01:13:34.123506 | orchestrator | Thursday 05 February 2026 01:06:29 +0000 (0:00:15.050) 0:01:09.404 ***** 2026-02-05 01:13:34.123513 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.123520 | orchestrator | 2026-02-05 
01:13:34.123526 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-05 01:13:34.123533 | orchestrator | Thursday 05 February 2026 01:06:43 +0000 (0:00:13.914) 0:01:23.319 ***** 2026-02-05 01:13:34.123537 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.123540 | orchestrator | 2026-02-05 01:13:34.123544 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-02-05 01:13:34.123548 | orchestrator | Thursday 05 February 2026 01:06:44 +0000 (0:00:01.030) 0:01:24.349 ***** 2026-02-05 01:13:34.123552 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.123555 | orchestrator | 2026-02-05 01:13:34.123559 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 01:13:34.123563 | orchestrator | Thursday 05 February 2026 01:06:45 +0000 (0:00:00.444) 0:01:24.794 ***** 2026-02-05 01:13:34.123567 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.123571 | orchestrator | 2026-02-05 01:13:34.123574 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-02-05 01:13:34.123578 | orchestrator | Thursday 05 February 2026 01:06:45 +0000 (0:00:00.529) 0:01:25.323 ***** 2026-02-05 01:13:34.123582 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.123585 | orchestrator | 2026-02-05 01:13:34.123589 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-05 01:13:34.123593 | orchestrator | Thursday 05 February 2026 01:07:03 +0000 (0:00:17.824) 0:01:43.147 ***** 2026-02-05 01:13:34.123597 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.123601 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123604 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123608 | orchestrator | 2026-02-05 
01:13:34.123612 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-02-05 01:13:34.123615 | orchestrator | 2026-02-05 01:13:34.123619 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-05 01:13:34.123623 | orchestrator | Thursday 05 February 2026 01:07:03 +0000 (0:00:00.367) 0:01:43.515 ***** 2026-02-05 01:13:34.123626 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.123630 | orchestrator | 2026-02-05 01:13:34.123634 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-02-05 01:13:34.123638 | orchestrator | Thursday 05 February 2026 01:07:04 +0000 (0:00:00.609) 0:01:44.124 ***** 2026-02-05 01:13:34.123641 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123645 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123649 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123652 | orchestrator | 2026-02-05 01:13:34.123660 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-02-05 01:13:34.123664 | orchestrator | Thursday 05 February 2026 01:07:06 +0000 (0:00:01.866) 0:01:45.991 ***** 2026-02-05 01:13:34.123667 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123671 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123675 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123678 | orchestrator | 2026-02-05 01:13:34.123682 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-05 01:13:34.123692 | orchestrator | Thursday 05 February 2026 01:07:08 +0000 (0:00:01.869) 0:01:47.861 ***** 2026-02-05 01:13:34.123696 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.123700 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123708 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 01:13:34.123712 | orchestrator | 2026-02-05 01:13:34.123715 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-05 01:13:34.123719 | orchestrator | Thursday 05 February 2026 01:07:08 +0000 (0:00:00.337) 0:01:48.198 ***** 2026-02-05 01:13:34.123723 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 01:13:34.123727 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123732 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-02-05 01:13:34.123738 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123744 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-05 01:13:34.123749 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-02-05 01:13:34.123755 | orchestrator | 2026-02-05 01:13:34.123760 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-02-05 01:13:34.123766 | orchestrator | Thursday 05 February 2026 01:07:15 +0000 (0:00:06.896) 0:01:55.095 ***** 2026-02-05 01:13:34.123772 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.123787 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123793 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123799 | orchestrator | 2026-02-05 01:13:34.123804 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-02-05 01:13:34.123810 | orchestrator | Thursday 05 February 2026 01:07:15 +0000 (0:00:00.299) 0:01:55.394 ***** 2026-02-05 01:13:34.123817 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-02-05 01:13:34.123823 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.123829 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-05 01:13:34.123844 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123852 | orchestrator | skipping: [testbed-node-2] => (item=None)  
2026-02-05 01:13:34.123856 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123860 | orchestrator | 2026-02-05 01:13:34.123864 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-05 01:13:34.123868 | orchestrator | Thursday 05 February 2026 01:07:16 +0000 (0:00:00.556) 0:01:55.951 ***** 2026-02-05 01:13:34.123871 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123875 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123879 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123883 | orchestrator | 2026-02-05 01:13:34.123887 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-05 01:13:34.123903 | orchestrator | Thursday 05 February 2026 01:07:16 +0000 (0:00:00.388) 0:01:56.339 ***** 2026-02-05 01:13:34.123909 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123916 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123924 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.123944 | orchestrator | 2026-02-05 01:13:34.123950 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-05 01:13:34.123957 | orchestrator | Thursday 05 February 2026 01:07:17 +0000 (0:00:00.920) 0:01:57.259 ***** 2026-02-05 01:13:34.123963 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.123969 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.123975 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.124057 | orchestrator | 2026-02-05 01:13:34.124068 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-05 01:13:34.124074 | orchestrator | Thursday 05 February 2026 01:07:19 +0000 (0:00:01.896) 0:01:59.156 ***** 2026-02-05 01:13:34.124080 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124086 | orchestrator | skipping: [testbed-node-2] 
2026-02-05 01:13:34.124091 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.124098 | orchestrator | 2026-02-05 01:13:34.124103 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-05 01:13:34.124109 | orchestrator | Thursday 05 February 2026 01:07:39 +0000 (0:00:19.607) 0:02:18.764 ***** 2026-02-05 01:13:34.124115 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124129 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124136 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.124143 | orchestrator | 2026-02-05 01:13:34.124149 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-05 01:13:34.124156 | orchestrator | Thursday 05 February 2026 01:07:51 +0000 (0:00:12.809) 0:02:31.573 ***** 2026-02-05 01:13:34.124160 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:13:34.124164 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124167 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124171 | orchestrator | 2026-02-05 01:13:34.124175 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-05 01:13:34.124179 | orchestrator | Thursday 05 February 2026 01:07:53 +0000 (0:00:01.175) 0:02:32.749 ***** 2026-02-05 01:13:34.124183 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124186 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124190 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.124194 | orchestrator | 2026-02-05 01:13:34.124198 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-05 01:13:34.124201 | orchestrator | Thursday 05 February 2026 01:08:06 +0000 (0:00:13.227) 0:02:45.976 ***** 2026-02-05 01:13:34.124205 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.124209 | orchestrator | skipping: [testbed-node-1] 2026-02-05 
01:13:34.124213 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124216 | orchestrator | 2026-02-05 01:13:34.124220 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-05 01:13:34.124224 | orchestrator | Thursday 05 February 2026 01:08:07 +0000 (0:00:01.075) 0:02:47.052 ***** 2026-02-05 01:13:34.124231 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.124235 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124239 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124243 | orchestrator | 2026-02-05 01:13:34.124246 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-05 01:13:34.124250 | orchestrator | 2026-02-05 01:13:34.124254 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 01:13:34.124258 | orchestrator | Thursday 05 February 2026 01:08:07 +0000 (0:00:00.425) 0:02:47.478 ***** 2026-02-05 01:13:34.124267 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.124343 | orchestrator | 2026-02-05 01:13:34.124348 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-05 01:13:34.124351 | orchestrator | Thursday 05 February 2026 01:08:08 +0000 (0:00:00.513) 0:02:47.991 ***** 2026-02-05 01:13:34.124355 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-05 01:13:34.124359 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-05 01:13:34.124363 | orchestrator | 2026-02-05 01:13:34.124367 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-05 01:13:34.124371 | orchestrator | Thursday 05 February 2026 01:08:12 +0000 (0:00:03.866) 0:02:51.857 ***** 2026-02-05 01:13:34.124374 | orchestrator | skipping: 
[testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-05 01:13:34.124383 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-05 01:13:34.124388 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-05 01:13:34.124392 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-05 01:13:34.124396 | orchestrator | 2026-02-05 01:13:34.124399 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-05 01:13:34.124403 | orchestrator | Thursday 05 February 2026 01:08:18 +0000 (0:00:06.484) 0:02:58.342 ***** 2026-02-05 01:13:34.124407 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-05 01:13:34.124411 | orchestrator | 2026-02-05 01:13:34.124418 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-02-05 01:13:34.124422 | orchestrator | Thursday 05 February 2026 01:08:22 +0000 (0:00:03.320) 0:03:01.663 ***** 2026-02-05 01:13:34.124427 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-05 01:13:34.124433 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-05 01:13:34.124439 | orchestrator | 2026-02-05 01:13:34.124445 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-05 01:13:34.124451 | orchestrator | Thursday 05 February 2026 01:08:26 +0000 (0:00:04.274) 0:03:05.938 ***** 2026-02-05 01:13:34.124457 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-05 01:13:34.124463 | orchestrator | 2026-02-05 01:13:34.124470 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-05 01:13:34.124476 | orchestrator | Thursday 05 
February 2026 01:08:29 +0000 (0:00:02.859) 0:03:08.797 ***** 2026-02-05 01:13:34.124482 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-05 01:13:34.124489 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-05 01:13:34.124495 | orchestrator | 2026-02-05 01:13:34.124501 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-05 01:13:34.124506 | orchestrator | Thursday 05 February 2026 01:08:37 +0000 (0:00:08.102) 0:03:16.900 ***** 2026-02-05 01:13:34.124514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.124534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.124543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.124555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.124562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.124568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.124574 | orchestrator | 2026-02-05 01:13:34.124580 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-05 01:13:34.124586 | orchestrator | Thursday 05 February 2026 01:08:38 +0000 (0:00:01.268) 0:03:18.169 ***** 2026-02-05 01:13:34.124595 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.124602 | orchestrator | 2026-02-05 01:13:34.124608 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-05 01:13:34.124615 | orchestrator | Thursday 05 February 2026 01:08:38 +0000 (0:00:00.124) 0:03:18.293 ***** 2026-02-05 01:13:34.124621 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.124628 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124634 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124641 | orchestrator | 2026-02-05 01:13:34.124645 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-05 01:13:34.124652 | orchestrator | Thursday 05 February 2026 01:08:38 +0000 (0:00:00.265) 0:03:18.558 ***** 2026-02-05 01:13:34.124657 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-05 01:13:34.124661 | orchestrator | 2026-02-05 01:13:34.124668 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-05 01:13:34.124671 | orchestrator | Thursday 05 February 2026 01:08:39 +0000 (0:00:00.663) 0:03:19.222 ***** 2026-02-05 01:13:34.124675 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.124679 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124683 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124687 | orchestrator | 2026-02-05 
01:13:34.124690 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-05 01:13:34.124694 | orchestrator | Thursday 05 February 2026 01:08:40 +0000 (0:00:00.394) 0:03:19.617 ***** 2026-02-05 01:13:34.124698 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.124702 | orchestrator | 2026-02-05 01:13:34.124706 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-05 01:13:34.124710 | orchestrator | Thursday 05 February 2026 01:08:40 +0000 (0:00:00.487) 0:03:20.104 ***** 2026-02-05 01:13:34.124714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.124719 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.124729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.124739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.124743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.124747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.124751 | orchestrator | 2026-02-05 01:13:34.124755 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-05 01:13:34.124759 | orchestrator | Thursday 05 February 2026 01:08:42 +0000 (0:00:02.097) 0:03:22.202 ***** 2026-02-05 01:13:34.124763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:34.124772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.124779 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.124784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:34.124790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.124796 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.124826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:34.124836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.124848 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.124855 | orchestrator | 2026-02-05 01:13:34.124862 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-05 01:13:34.124867 | orchestrator | Thursday 05 February 2026 01:08:43 +0000 (0:00:00.690) 0:03:22.893 ***** 2026-02-05 01:13:34.125380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:34.125403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.125410 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.125418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-02-05 01:13:34.125425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.125438 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.125455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 
01:13:34.125464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.125470 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.125477 | orchestrator | 2026-02-05 01:13:34.125483 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-05 01:13:34.125490 | orchestrator | Thursday 05 February 2026 01:08:43 +0000 (0:00:00.685) 0:03:23.578 ***** 2026-02-05 01:13:34.125496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-05 01:13:34.125543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125553 | orchestrator | 2026-02-05 01:13:34.125559 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-05 01:13:34.125565 | orchestrator | Thursday 05 February 2026 01:08:46 +0000 (0:00:02.478) 0:03:26.057 ***** 2026-02-05 01:13:34.125577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125621 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125627 | orchestrator | 2026-02-05 01:13:34.125633 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-05 01:13:34.125639 | orchestrator | Thursday 05 February 2026 01:08:51 +0000 (0:00:05.042) 0:03:31.099 ***** 2026-02-05 01:13:34.125645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:34.125651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.125656 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.125674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-05 01:13:34.125689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.125696 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.125706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-02-05 01:13:34.125714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.125720 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.125727 | orchestrator | 2026-02-05 01:13:34.125733 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-05 01:13:34.125740 | orchestrator | Thursday 05 February 2026 01:08:52 +0000 (0:00:00.588) 0:03:31.688 ***** 2026-02-05 01:13:34.125746 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.125760 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:34.125767 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:34.125773 | orchestrator | 2026-02-05 01:13:34.125779 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-05 01:13:34.125786 | orchestrator | Thursday 05 February 2026 01:08:53 +0000 (0:00:01.589) 0:03:33.277 ***** 2026-02-05 01:13:34.125792 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.125798 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.125804 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.125810 | orchestrator | 2026-02-05 01:13:34.125817 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-05 01:13:34.125823 | orchestrator | Thursday 05 February 2026 01:08:53 +0000 (0:00:00.303) 0:03:33.580 ***** 
2026-02-05 01:13:34.125833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-05 01:13:34.125861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.125875 | orchestrator | 2026-02-05 01:13:34.125879 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 01:13:34.125883 | orchestrator | Thursday 05 February 2026 01:08:55 +0000 (0:00:01.893) 0:03:35.474 ***** 2026-02-05 01:13:34.125887 | orchestrator | 2026-02-05 01:13:34.125893 
| orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 01:13:34.125897 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:00.251) 0:03:35.725 ***** 2026-02-05 01:13:34.125900 | orchestrator | 2026-02-05 01:13:34.125904 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-05 01:13:34.125908 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:00.121) 0:03:35.847 ***** 2026-02-05 01:13:34.125912 | orchestrator | 2026-02-05 01:13:34.125915 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-05 01:13:34.125919 | orchestrator | Thursday 05 February 2026 01:08:56 +0000 (0:00:00.135) 0:03:35.983 ***** 2026-02-05 01:13:34.125923 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.125926 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:34.125930 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:34.125934 | orchestrator | 2026-02-05 01:13:34.125938 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-05 01:13:34.125941 | orchestrator | Thursday 05 February 2026 01:09:16 +0000 (0:00:20.603) 0:03:56.587 ***** 2026-02-05 01:13:34.125945 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:13:34.125949 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:13:34.125953 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:13:34.125956 | orchestrator | 2026-02-05 01:13:34.125960 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-05 01:13:34.125966 | orchestrator | 2026-02-05 01:13:34.125970 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:34.125974 | orchestrator | Thursday 05 February 2026 01:09:26 +0000 (0:00:09.964) 0:04:06.551 ***** 2026-02-05 01:13:34.125978 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.126090 | orchestrator | 2026-02-05 01:13:34.126109 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:34.126114 | orchestrator | Thursday 05 February 2026 01:09:28 +0000 (0:00:01.163) 0:04:07.715 ***** 2026-02-05 01:13:34.126119 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:34.126123 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.126128 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.126132 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.126137 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.126141 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.126178 | orchestrator | 2026-02-05 01:13:34.126188 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-05 01:13:34.126192 | orchestrator | Thursday 05 February 2026 01:09:28 +0000 (0:00:00.594) 0:04:08.309 ***** 2026-02-05 01:13:34.126197 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.126201 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.126206 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.126210 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-05 01:13:34.126214 | orchestrator | 2026-02-05 01:13:34.126219 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-05 01:13:34.126224 | orchestrator | Thursday 05 February 2026 01:09:29 +0000 (0:00:01.021) 0:04:09.331 ***** 2026-02-05 01:13:34.126228 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-05 01:13:34.126233 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-05 01:13:34.126238 | orchestrator | ok: 
[testbed-node-5] => (item=br_netfilter) 2026-02-05 01:13:34.126242 | orchestrator | 2026-02-05 01:13:34.126247 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-05 01:13:34.126251 | orchestrator | Thursday 05 February 2026 01:09:30 +0000 (0:00:00.679) 0:04:10.010 ***** 2026-02-05 01:13:34.126255 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-05 01:13:34.126260 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-05 01:13:34.126265 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-05 01:13:34.126269 | orchestrator | 2026-02-05 01:13:34.126273 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-05 01:13:34.126278 | orchestrator | Thursday 05 February 2026 01:09:31 +0000 (0:00:01.095) 0:04:11.105 ***** 2026-02-05 01:13:34.126282 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-05 01:13:34.126287 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:34.126291 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-05 01:13:34.126296 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.126300 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-05 01:13:34.126304 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.126309 | orchestrator | 2026-02-05 01:13:34.126313 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-05 01:13:34.126318 | orchestrator | Thursday 05 February 2026 01:09:32 +0000 (0:00:00.715) 0:04:11.821 ***** 2026-02-05 01:13:34.126322 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 01:13:34.126326 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 01:13:34.126331 | orchestrator | skipping: [testbed-node-0] 2026-02-05 
01:13:34.126335 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 01:13:34.126361 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 01:13:34.126366 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.126373 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 01:13:34.126378 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-05 01:13:34.126382 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-05 01:13:34.126387 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 01:13:34.126394 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.126408 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-05 01:13:34.126417 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 01:13:34.126423 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 01:13:34.126430 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-05 01:13:34.126436 | orchestrator | 2026-02-05 01:13:34.126442 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-05 01:13:34.126448 | orchestrator | Thursday 05 February 2026 01:09:34 +0000 (0:00:02.045) 0:04:13.866 ***** 2026-02-05 01:13:34.126453 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.126459 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.126465 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.126471 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:34.126477 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:34.126484 | orchestrator | changed: 
[testbed-node-5] 2026-02-05 01:13:34.126490 | orchestrator | 2026-02-05 01:13:34.126496 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-05 01:13:34.126502 | orchestrator | Thursday 05 February 2026 01:09:35 +0000 (0:00:01.092) 0:04:14.958 ***** 2026-02-05 01:13:34.126509 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.126515 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.126521 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.126528 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:34.126540 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:34.126547 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:34.126553 | orchestrator | 2026-02-05 01:13:34.126559 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-05 01:13:34.126564 | orchestrator | Thursday 05 February 2026 01:09:36 +0000 (0:00:01.545) 0:04:16.504 ***** 2026-02-05 01:13:34.126569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126574 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126643 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126666 | orchestrator | 2026-02-05 01:13:34.126670 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-05 01:13:34.126674 | orchestrator | Thursday 05 February 2026 01:09:39 +0000 
(0:00:02.144) 0:04:18.648 ***** 2026-02-05 01:13:34.126678 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:13:34.126683 | orchestrator | 2026-02-05 01:13:34.126687 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-05 01:13:34.126692 | orchestrator | Thursday 05 February 2026 01:09:40 +0000 (0:00:01.178) 0:04:19.826 ***** 2026-02-05 01:13:34.126700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126719 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126776 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126780 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-05 01:13:34.126787 | orchestrator | 2026-02-05 01:13:34.126791 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-05 01:13:34.126795 | orchestrator | Thursday 05 February 2026 01:09:43 +0000 (0:00:03.223) 0:04:23.050 ***** 2026-02-05 01:13:34.126799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.126803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.126809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126813 | orchestrator | skipping: [testbed-node-3] 2026-02-05 
01:13:34.126820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.126824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.126831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126835 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.126839 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.126843 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.126853 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126857 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.126861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:34.126865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126872 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.126876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:34.126881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126884 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.126888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:34.126894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126898 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.126902 | orchestrator | 2026-02-05 01:13:34.126908 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-05 01:13:34.126912 | orchestrator | Thursday 05 February 2026 01:09:45 +0000 (0:00:01.713) 0:04:24.763 ***** 2026-02-05 01:13:34.126917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 
01:13:34.126923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.126927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126931 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.126935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.126941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.126948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.126955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.126959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.126963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.126967 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.126971 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.126975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.126981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.126999 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127025 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127033 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127037 | orchestrator |
2026-02-05 01:13:34.127041 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-05 01:13:34.127045 | orchestrator | Thursday 05 February 2026 01:09:46 +0000 (0:00:01.824) 0:04:26.588 *****
2026-02-05 01:13:34.127048 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127052 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127056 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127060 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-05 01:13:34.127063 | orchestrator |
2026-02-05 01:13:34.127067 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-05 01:13:34.127071 | orchestrator | Thursday 05 February 2026 01:09:47 +0000 (0:00:00.829) 0:04:27.418 *****
2026-02-05 01:13:34.127075 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 01:13:34.127078 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 01:13:34.127082 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 01:13:34.127086 | orchestrator |
2026-02-05 01:13:34.127090 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-05 01:13:34.127093 | orchestrator | Thursday 05 February 2026 01:09:48 +0000 (0:00:01.140) 0:04:28.558 *****
2026-02-05 01:13:34.127097 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 01:13:34.127101 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-05 01:13:34.127105 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-05 01:13:34.127109 | orchestrator |
2026-02-05 01:13:34.127113 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-05 01:13:34.127116 | orchestrator | Thursday 05 February 2026 01:09:49 +0000 (0:00:00.915) 0:04:29.474 *****
2026-02-05 01:13:34.127120 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:13:34.127127 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:13:34.127131 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:13:34.127135 | orchestrator |
2026-02-05 01:13:34.127140 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-05 01:13:34.127144 | orchestrator | Thursday 05 February 2026 01:09:50 +0000 (0:00:00.484) 0:04:29.958 *****
2026-02-05 01:13:34.127148 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:13:34.127152 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:13:34.127155 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:13:34.127159 | orchestrator |
2026-02-05 01:13:34.127163 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-05 01:13:34.127167 | orchestrator | Thursday 05 February 2026 01:09:51 +0000 (0:00:00.728) 0:04:30.687 *****
2026-02-05 01:13:34.127173 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-05 01:13:34.127177 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-05 01:13:34.127181 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-05 01:13:34.127184 | orchestrator |
2026-02-05 01:13:34.127188 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-05 01:13:34.127192 | orchestrator | Thursday 05 February 2026 01:09:52 +0000 (0:00:01.295) 0:04:31.982 *****
2026-02-05 01:13:34.127196 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-05 01:13:34.127200 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-05 01:13:34.127203 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-05 01:13:34.127207 | orchestrator |
2026-02-05 01:13:34.127211 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-05 01:13:34.127215 | orchestrator | Thursday 05 February 2026 01:09:53 +0000 (0:00:01.039) 0:04:33.022 *****
2026-02-05 01:13:34.127219 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-05 01:13:34.127223 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-05 01:13:34.127227 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-05 01:13:34.127230 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-05 01:13:34.127234 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-05 01:13:34.127238 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-05 01:13:34.127242 | orchestrator |
2026-02-05 01:13:34.127246 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-05 01:13:34.127250 | orchestrator | Thursday 05 February 2026 01:09:56 +0000 (0:00:03.481) 0:04:36.503 *****
2026-02-05 01:13:34.127254 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127258 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.127261 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.127265 | orchestrator |
2026-02-05 01:13:34.127269 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-05 01:13:34.127273 | orchestrator | Thursday 05 February 2026 01:09:57 +0000 (0:00:00.314) 0:04:36.818 *****
2026-02-05 01:13:34.127277 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127280 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.127284 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.127288 | orchestrator |
2026-02-05 01:13:34.127292 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-05 01:13:34.127296 | orchestrator | Thursday 05 February 2026 01:09:57 +0000 (0:00:00.480) 0:04:37.298 *****
2026-02-05 01:13:34.127300 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.127303 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.127307 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.127311 | orchestrator |
2026-02-05 01:13:34.127315 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-05 01:13:34.127319 | orchestrator | Thursday 05 February 2026 01:09:58 +0000 (0:00:01.214) 0:04:38.513 *****
2026-02-05 01:13:34.127323 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-05 01:13:34.127331 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-05 01:13:34.127335 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-05 01:13:34.127338 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-05 01:13:34.127342 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-05 01:13:34.127346 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-05 01:13:34.127350 | orchestrator |
2026-02-05 01:13:34.127354 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-05 01:13:34.127358 | orchestrator | Thursday 05 February 2026 01:10:02 +0000 (0:00:03.468) 0:04:41.981 *****
2026-02-05 01:13:34.127362 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-05 01:13:34.127365 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-05 01:13:34.127369 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-05 01:13:34.127373 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-05 01:13:34.127377 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.127381 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-05 01:13:34.127385 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.127388 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-05 01:13:34.127392 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.127396 | orchestrator |
2026-02-05 01:13:34.127400 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-05 01:13:34.127404 | orchestrator | Thursday 05 February 2026 01:10:05 +0000 (0:00:03.285) 0:04:45.266 *****
2026-02-05 01:13:34.127407 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127411 | orchestrator |
2026-02-05 01:13:34.127417 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-05 01:13:34.127421 | orchestrator | Thursday 05 February 2026 01:10:05 +0000 (0:00:00.228) 0:04:45.495 *****
2026-02-05 01:13:34.127425 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127429 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.127433 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.127436 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127440 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127444 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127448 | orchestrator |
2026-02-05 01:13:34.127454 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-05 01:13:34.127459 | orchestrator | Thursday 05 February 2026 01:10:06 +0000 (0:00:00.511) 0:04:46.006 *****
2026-02-05 01:13:34.127463 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-05 01:13:34.127467 | orchestrator |
2026-02-05 01:13:34.127471 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-05 01:13:34.127474 | orchestrator | Thursday 05 February 2026 01:10:07 +0000 (0:00:00.659) 0:04:46.666 *****
2026-02-05 01:13:34.127478 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127482 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.127486 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.127490 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127494 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127498 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127502 | orchestrator |
2026-02-05 01:13:34.127506 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-05 01:13:34.127509 | orchestrator | Thursday 05 February 2026 01:10:07 +0000 (0:00:00.645) 0:04:47.312 *****
2026-02-05 01:13:34.127513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 01:13:34.127520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 01:13:34.127525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 01:13:34.127530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.127553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.127557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.127561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127579 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127586 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127595 | orchestrator |
2026-02-05 01:13:34.127598 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-05 01:13:34.127602 | orchestrator | Thursday 05 February 2026 01:10:11 +0000 (0:00:03.706) 0:04:51.018 *****
2026-02-05 01:13:34.127608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 01:13:34.127615 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.127622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 01:13:34.127626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.127630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-05 01:13:34.127634 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.127640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.127793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.127814 | orchestrator |
2026-02-05 01:13:34.127818 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-05 01:13:34.127822 | orchestrator | Thursday 05 February 2026 01:10:17 +0000 (0:00:06.019) 0:04:57.038 *****
2026-02-05 01:13:34.127826 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127830 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.127833 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.127837 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127841 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127845 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127848 | orchestrator |
2026-02-05 01:13:34.127852 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-05 01:13:34.127856 | orchestrator | Thursday 05 February 2026 01:10:18 +0000 (0:00:01.438) 0:04:58.476 *****
2026-02-05 01:13:34.127860 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 01:13:34.127864 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 01:13:34.127868 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 01:13:34.127871 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 01:13:34.127875 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 01:13:34.127879 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127883 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 01:13:34.127887 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 01:13:34.127891 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127894 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 01:13:34.127898 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127902 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-05 01:13:34.127906 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 01:13:34.127910 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 01:13:34.127914 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-05 01:13:34.127918 | orchestrator |
2026-02-05 01:13:34.127922 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-05 01:13:34.127925 | orchestrator | Thursday 05 February 2026 01:10:22 +0000 (0:00:03.920) 0:05:02.397 *****
2026-02-05 01:13:34.127930 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.127933 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.127937 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.127941 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.127948 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.127952 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.127956 | orchestrator |
2026-02-05 01:13:34.127960 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-02-05 01:13:34.127963 | orchestrator | Thursday 05 February 2026 01:10:23 +0000 (0:00:00.568) 0:05:02.965 *****
2026-02-05 01:13:34.127967 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 01:13:34.127971 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 01:13:34.127975 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 01:13:34.127979 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 01:13:34.127996 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 01:13:34.128001 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-05 01:13:34.128005 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-05 01:13:34.128011 | orchestrator | skipping:
[testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:34.128015 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:34.128019 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:34.128023 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128027 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:34.128030 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128034 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-05 01:13:34.128038 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.128042 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:34.128046 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:34.128049 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:34.128053 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:34.128057 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:34.128061 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-05 01:13:34.128065 | orchestrator | 2026-02-05 01:13:34.128069 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] 
********************************** 2026-02-05 01:13:34.128072 | orchestrator | Thursday 05 February 2026 01:10:28 +0000 (0:00:04.911) 0:05:07.877 ***** 2026-02-05 01:13:34.128076 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 01:13:34.128080 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 01:13:34.128084 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-05 01:13:34.128088 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-05 01:13:34.128092 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:13:34.128099 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-05 01:13:34.128103 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:13:34.128107 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-05 01:13:34.128111 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-05 01:13:34.128115 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 01:13:34.128118 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 01:13:34.128122 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-05 01:13:34.128126 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-05 01:13:34.128130 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128134 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 
01:13:34.128137 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-05 01:13:34.128141 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.128145 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-05 01:13:34.128149 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128153 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:13:34.128157 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-05 01:13:34.128161 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:13:34.128164 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:13:34.128168 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-05 01:13:34.128172 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:13:34.128178 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:13:34.128182 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-05 01:13:34.128185 | orchestrator | 2026-02-05 01:13:34.128189 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-05 01:13:34.128193 | orchestrator | Thursday 05 February 2026 01:10:35 +0000 (0:00:07.077) 0:05:14.954 ***** 2026-02-05 01:13:34.128197 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:34.128201 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.128206 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.128210 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128214 | orchestrator | skipping: 
[testbed-node-1] 2026-02-05 01:13:34.128218 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128222 | orchestrator | 2026-02-05 01:13:34.128226 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-05 01:13:34.128229 | orchestrator | Thursday 05 February 2026 01:10:35 +0000 (0:00:00.487) 0:05:15.441 ***** 2026-02-05 01:13:34.128233 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:34.128237 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.128241 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.128245 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128248 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.128252 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128256 | orchestrator | 2026-02-05 01:13:34.128260 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-05 01:13:34.128264 | orchestrator | Thursday 05 February 2026 01:10:36 +0000 (0:00:00.668) 0:05:16.110 ***** 2026-02-05 01:13:34.128270 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128274 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.128278 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128281 | orchestrator | changed: [testbed-node-4] 2026-02-05 01:13:34.128286 | orchestrator | changed: [testbed-node-3] 2026-02-05 01:13:34.128290 | orchestrator | changed: [testbed-node-5] 2026-02-05 01:13:34.128293 | orchestrator | 2026-02-05 01:13:34.128298 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-05 01:13:34.128302 | orchestrator | Thursday 05 February 2026 01:10:38 +0000 (0:00:01.811) 0:05:17.921 ***** 2026-02-05 01:13:34.128306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.128310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.128314 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.128322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.128326 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:34.128330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.128338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.128342 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.128346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-05 01:13:34.128350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-05 01:13:34.128356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.128361 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.128367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:34.128374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.128378 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:34.128386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.128390 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.128394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-05 01:13:34.128398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-05 01:13:34.128402 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128406 | orchestrator | 2026-02-05 01:13:34.128410 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-05 01:13:34.128414 | orchestrator | Thursday 05 February 2026 01:10:39 +0000 (0:00:01.363) 0:05:19.285 ***** 2026-02-05 01:13:34.128418 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-05 01:13:34.128422 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-05 01:13:34.128426 | orchestrator | skipping: [testbed-node-3] 2026-02-05 01:13:34.128432 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-05 01:13:34.128438 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-05 01:13:34.128442 | orchestrator | skipping: [testbed-node-4] 2026-02-05 01:13:34.128446 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-05 01:13:34.128450 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-05 01:13:34.128454 | orchestrator | skipping: [testbed-node-5] 2026-02-05 01:13:34.128457 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-05 01:13:34.128463 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-05 01:13:34.128467 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:13:34.128471 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-05 01:13:34.128475 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-05 01:13:34.128479 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:13:34.128483 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-05 01:13:34.128486 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-05 01:13:34.128490 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:13:34.128494 | orchestrator | 2026-02-05 01:13:34.128498 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-05 01:13:34.128502 | orchestrator | Thursday 05 February 2026 01:10:40 +0000 (0:00:00.555) 0:05:19.841 ***** 2026-02-05 01:13:34.128506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.128510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.128514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-05 01:13:34.128522 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.128528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-05 01:13:34.128532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-05 01:13:34.128536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.128540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-05 01:13:34.128544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-05 01:13:34.128548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.128556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.128563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.128568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.128572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.128575 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-05 01:13:34.128580 | orchestrator |
2026-02-05 01:13:34.128584 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-05 01:13:34.128587 | orchestrator | Thursday 05 February 2026 01:10:42 +0000 (0:00:02.668) 0:05:22.509 *****
2026-02-05 01:13:34.128595 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.128599 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.128602 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.128606 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.128610 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.128614 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.128618 | orchestrator |
2026-02-05 01:13:34.128622 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:34.128625 | orchestrator | Thursday 05 February 2026 01:10:43 +0000 (0:00:00.504) 0:05:23.014 *****
2026-02-05 01:13:34.128629 | orchestrator |
2026-02-05 01:13:34.128633 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:34.128637 | orchestrator | Thursday 05 February 2026 01:10:43 +0000 (0:00:00.233) 0:05:23.247 *****
2026-02-05 01:13:34.128641 | orchestrator |
2026-02-05 01:13:34.128645 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:34.128648 | orchestrator | Thursday 05 February 2026 01:10:43 +0000 (0:00:00.120) 0:05:23.368 *****
2026-02-05 01:13:34.128652 | orchestrator |
2026-02-05 01:13:34.128656 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:34.128660 | orchestrator | Thursday 05 February 2026 01:10:43 +0000 (0:00:00.121) 0:05:23.489 *****
2026-02-05 01:13:34.128664 | orchestrator |
2026-02-05 01:13:34.128668 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:34.128674 | orchestrator | Thursday 05 February 2026 01:10:44 +0000 (0:00:00.118) 0:05:23.607 *****
2026-02-05 01:13:34.128678 | orchestrator |
2026-02-05 01:13:34.128682 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-02-05 01:13:34.128686 | orchestrator | Thursday 05 February 2026 01:10:44 +0000 (0:00:00.116) 0:05:23.724 *****
2026-02-05 01:13:34.128690 | orchestrator |
2026-02-05 01:13:34.128693 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-02-05 01:13:34.128700 | orchestrator | Thursday 05 February 2026 01:10:44 +0000 (0:00:00.123) 0:05:23.847 *****
2026-02-05 01:13:34.128704 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:34.128708 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:34.128712 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:34.128716 | orchestrator |
2026-02-05 01:13:34.128720 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-02-05 01:13:34.128724 | orchestrator | Thursday 05 February 2026 01:10:54 +0000 (0:00:10.316) 0:05:34.163 *****
2026-02-05 01:13:34.128727 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:34.128731 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:34.128735 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:34.128739 | orchestrator |
2026-02-05 01:13:34.128743 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-02-05 01:13:34.128747 | orchestrator | Thursday 05 February 2026 01:11:05 +0000 (0:00:10.803) 0:05:44.967 *****
2026-02-05 01:13:34.128750 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.128754 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.128758 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.128762 | orchestrator |
2026-02-05 01:13:34.128766 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-02-05 01:13:34.128770 | orchestrator | Thursday 05 February 2026 01:11:26 +0000 (0:00:21.092) 0:06:06.059 *****
2026-02-05 01:13:34.128774 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.128778 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.128782 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.128785 | orchestrator |
2026-02-05 01:13:34.128789 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-02-05 01:13:34.128793 | orchestrator | Thursday 05 February 2026 01:11:56 +0000 (0:00:29.794) 0:06:35.853 *****
2026-02-05 01:13:34.128797 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.128801 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.128809 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.128813 | orchestrator |
2026-02-05 01:13:34.128817 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-05 01:13:34.128821 | orchestrator | Thursday 05 February 2026 01:11:57 +0000 (0:00:01.032) 0:06:36.886 *****
2026-02-05 01:13:34.128825 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.128829 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.128833 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.128836 | orchestrator |
2026-02-05 01:13:34.128840 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-05 01:13:34.128844 | orchestrator | Thursday 05 February 2026 01:11:58 +0000 (0:00:00.831) 0:06:37.718 *****
2026-02-05 01:13:34.128848 | orchestrator | changed: [testbed-node-5]
2026-02-05 01:13:34.128852 | orchestrator | changed: [testbed-node-4]
2026-02-05 01:13:34.128855 | orchestrator | changed: [testbed-node-3]
2026-02-05 01:13:34.128859 | orchestrator |
2026-02-05 01:13:34.128863 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-05 01:13:34.128867 | orchestrator | Thursday 05 February 2026 01:12:23 +0000 (0:00:25.213) 0:07:02.932 *****
2026-02-05 01:13:34.128871 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.128874 | orchestrator |
2026-02-05 01:13:34.128878 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-05 01:13:34.128882 | orchestrator | Thursday 05 February 2026 01:12:23 +0000 (0:00:00.116) 0:07:03.049 *****
2026-02-05 01:13:34.128886 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.128889 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.128893 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.128897 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.128901 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.128905 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
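The "FAILED - RETRYING … (20 retries left)" line above is Ansible's standard `until`/`retries`/`delay` loop: the task re-runs a check (here, whether every compute host appears in the service list) until it succeeds or the retry budget is exhausted. A minimal sketch of that pattern in Python, assuming nothing about the actual kolla-ansible task beyond what the log shows (the function name `wait_until` and the example check are illustrative only):

```python
# Sketch of the retry/poll pattern behind "FAILED - RETRYING ... (N retries left)".
# Not the kolla-ansible implementation; just the same control flow.
import time


def wait_until(check, retries=20, delay=0, log=print):
    """Re-run check() up to `retries` times; return True on first success."""
    for attempts_left in range(retries, 0, -1):
        if check():
            return True
        log(f"FAILED - RETRYING ({attempts_left - 1} retries left).")
        time.sleep(delay)  # Ansible would sleep `delay` seconds here
    return False


# Example: a hypothetical service that registers itself on the third poll.
polls = iter([False, False, True])
registered = wait_until(lambda: next(polls), retries=20, delay=0)
```

In the log, the first attempt fails and the second succeeds ("ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]"), so the task completes well inside its 20-retry budget.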
2026-02-05 01:13:34.128910 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:34.128913 | orchestrator |
2026-02-05 01:13:34.128917 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-05 01:13:34.128921 | orchestrator | Thursday 05 February 2026 01:12:45 +0000 (0:00:22.369) 0:07:25.418 *****
2026-02-05 01:13:34.128925 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.128929 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.128933 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.128937 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.128940 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.128944 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.128949 | orchestrator |
2026-02-05 01:13:34.128952 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-05 01:13:34.128956 | orchestrator | Thursday 05 February 2026 01:12:53 +0000 (0:00:07.971) 0:07:33.389 *****
2026-02-05 01:13:34.128960 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.128964 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.128968 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.128972 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.128975 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.128979 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-05 01:13:34.128995 | orchestrator |
2026-02-05 01:13:34.129002 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-05 01:13:34.129008 | orchestrator | Thursday 05 February 2026 01:12:57 +0000 (0:00:03.405) 0:07:36.795 *****
2026-02-05 01:13:34.129016 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:34.129023 | orchestrator |
2026-02-05 01:13:34.129029 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-05 01:13:34.129039 | orchestrator | Thursday 05 February 2026 01:13:11 +0000 (0:00:14.682) 0:07:51.477 *****
2026-02-05 01:13:34.129047 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:34.129051 | orchestrator |
2026-02-05 01:13:34.129054 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-05 01:13:34.129058 | orchestrator | Thursday 05 February 2026 01:13:13 +0000 (0:00:01.215) 0:07:52.693 *****
2026-02-05 01:13:34.129062 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.129066 | orchestrator |
2026-02-05 01:13:34.129073 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-05 01:13:34.129078 | orchestrator | Thursday 05 February 2026 01:13:14 +0000 (0:00:01.195) 0:07:53.888 *****
2026-02-05 01:13:34.129081 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-05 01:13:34.129085 | orchestrator |
2026-02-05 01:13:34.129089 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-05 01:13:34.129093 | orchestrator | Thursday 05 February 2026 01:13:27 +0000 (0:00:12.808) 0:08:06.697 *****
2026-02-05 01:13:34.129097 | orchestrator | ok: [testbed-node-3]
2026-02-05 01:13:34.129101 | orchestrator | ok: [testbed-node-4]
2026-02-05 01:13:34.129105 | orchestrator | ok: [testbed-node-5]
2026-02-05 01:13:34.129108 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:13:34.129112 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:13:34.129116 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:13:34.129120 | orchestrator |
2026-02-05 01:13:34.129123 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-05 01:13:34.129127 | orchestrator |
2026-02-05 01:13:34.129131 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-05 01:13:34.129135 | orchestrator | Thursday 05 February 2026 01:13:28 +0000 (0:00:01.574) 0:08:08.272 *****
2026-02-05 01:13:34.129138 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:13:34.129142 | orchestrator | changed: [testbed-node-1]
2026-02-05 01:13:34.129146 | orchestrator | changed: [testbed-node-2]
2026-02-05 01:13:34.129150 | orchestrator |
2026-02-05 01:13:34.129153 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-05 01:13:34.129157 | orchestrator |
2026-02-05 01:13:34.129161 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-05 01:13:34.129165 | orchestrator | Thursday 05 February 2026 01:13:29 +0000 (0:00:00.879) 0:08:09.152 *****
2026-02-05 01:13:34.129169 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.129173 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.129176 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.129180 | orchestrator |
2026-02-05 01:13:34.129184 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-05 01:13:34.129188 | orchestrator |
2026-02-05 01:13:34.129191 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-05 01:13:34.129195 | orchestrator | Thursday 05 February 2026 01:13:30 +0000 (0:00:00.732) 0:08:09.884 *****
2026-02-05 01:13:34.129199 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-05 01:13:34.129203 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-05 01:13:34.129206 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-05 01:13:34.129210 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-05 01:13:34.129214 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-05 01:13:34.129218 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:34.129222 | orchestrator | skipping: [testbed-node-3]
2026-02-05 01:13:34.129225 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-05 01:13:34.129229 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-05 01:13:34.129233 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-05 01:13:34.129237 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-05 01:13:34.129241 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-05 01:13:34.129244 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:34.129251 | orchestrator | skipping: [testbed-node-4]
2026-02-05 01:13:34.129255 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-05 01:13:34.129258 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-05 01:13:34.129262 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-05 01:13:34.129266 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-05 01:13:34.129270 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-05 01:13:34.129273 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:34.129277 | orchestrator | skipping: [testbed-node-5]
2026-02-05 01:13:34.129281 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-05 01:13:34.129285 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-05 01:13:34.129289 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-05 01:13:34.129293 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-05 01:13:34.129296 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-05 01:13:34.129300 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:34.129304 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.129307 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-05 01:13:34.129311 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-05 01:13:34.129315 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-05 01:13:34.129319 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-05 01:13:34.129323 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-05 01:13:34.129326 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:34.129330 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.129336 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-05 01:13:34.129340 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-05 01:13:34.129344 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-05 01:13:34.129348 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-05 01:13:34.129352 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-05 01:13:34.129356 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-05 01:13:34.129359 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.129363 | orchestrator |
2026-02-05 01:13:34.129369 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-05 01:13:34.129373 | orchestrator |
2026-02-05 01:13:34.129377 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-05 01:13:34.129381 | orchestrator | Thursday 05 February 2026 01:13:31 +0000 (0:00:01.316) 0:08:11.201 *****
2026-02-05 01:13:34.129385 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-05 01:13:34.129389 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-05 01:13:34.129393 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.129397 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-05 01:13:34.129400 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-05 01:13:34.129404 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.129408 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-05 01:13:34.129412 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-05 01:13:34.129416 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.129419 | orchestrator |
2026-02-05 01:13:34.129423 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-05 01:13:34.129427 | orchestrator |
2026-02-05 01:13:34.129431 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-05 01:13:34.129437 | orchestrator | Thursday 05 February 2026 01:13:32 +0000 (0:00:00.535) 0:08:11.736 *****
2026-02-05 01:13:34.129441 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.129445 | orchestrator |
2026-02-05 01:13:34.129449 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-05 01:13:34.129452 | orchestrator |
2026-02-05 01:13:34.129456 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-05 01:13:34.129460 | orchestrator | Thursday 05 February 2026 01:13:33 +0000 (0:00:01.120) 0:08:12.857 *****
2026-02-05 01:13:34.129464 | orchestrator | skipping: [testbed-node-0]
2026-02-05 01:13:34.129468 | orchestrator | skipping: [testbed-node-1]
2026-02-05 01:13:34.129472 | orchestrator | skipping: [testbed-node-2]
2026-02-05 01:13:34.129475 | orchestrator |
2026-02-05 01:13:34.129479 | orchestrator | PLAY RECAP *********************************************************************
2026-02-05 01:13:34.129483 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-05 01:13:34.129487 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-05 01:13:34.129491 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-05 01:13:34.129495 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-05 01:13:34.129499 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-05 01:13:34.129503 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-05 01:13:34.129507 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-05 01:13:34.129511 | orchestrator |
2026-02-05 01:13:34.129514 | orchestrator |
2026-02-05 01:13:34.129518 | orchestrator | TASKS RECAP ********************************************************************
2026-02-05 01:13:34.129522 | orchestrator | Thursday 05 February 2026 01:13:33 +0000 (0:00:00.425) 0:08:13.282 *****
2026-02-05 01:13:34.129526 | orchestrator | ===============================================================================
2026-02-05 01:13:34.129530 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.96s
2026-02-05 01:13:34.129534 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.79s
2026-02-05 01:13:34.129537 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.21s
2026-02-05 01:13:34.129541 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.37s
2026-02-05 01:13:34.129578 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.09s
2026-02-05 01:13:34.129583 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 20.60s
2026-02-05 01:13:34.129589 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.61s
2026-02-05 01:13:34.129597 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.82s
2026-02-05 01:13:34.129606 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.05s
2026-02-05 01:13:34.129612 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.68s
2026-02-05 01:13:34.129618 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.91s
2026-02-05 01:13:34.129627 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.23s
2026-02-05 01:13:34.129634 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.81s
2026-02-05 01:13:34.129640 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.81s
2026-02-05 01:13:34.129649 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 10.80s
2026-02-05 01:13:34.129654 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.32s
2026-02-05 01:13:34.129673 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.96s
2026-02-05 01:13:34.129681 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.10s
2026-02-05 01:13:34.129687 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.97s
2026-02-05 01:13:34.129694 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.08s
2026-02-05 01:13:34.129700 | orchestrator | 2026-02-05 01:13:34 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:34.129707 | orchestrator | 2026-02-05 01:13:34 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:37.173086 | orchestrator | 2026-02-05 01:13:37 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:37.173175 | orchestrator | 2026-02-05 01:13:37 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:40.220477 | orchestrator | 2026-02-05 01:13:40 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:40.220539 | orchestrator | 2026-02-05 01:13:40 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:43.265191 | orchestrator | 2026-02-05 01:13:43 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:43.265254 | orchestrator | 2026-02-05 01:13:43 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:46.308948 | orchestrator | 2026-02-05 01:13:46 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:46.309027 | orchestrator | 2026-02-05 01:13:46 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:49.347188 | orchestrator | 2026-02-05 01:13:49 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:49.347299 | orchestrator | 2026-02-05 01:13:49 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:52.391197 | orchestrator | 2026-02-05 01:13:52 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:52.391245 | orchestrator | 2026-02-05 01:13:52 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:55.436920 | orchestrator | 2026-02-05 01:13:55 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:55.437010 | orchestrator | 2026-02-05 01:13:55 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:13:58.478707 | orchestrator | 2026-02-05 01:13:58 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:13:58.478760 | orchestrator | 2026-02-05 01:13:58 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:01.515513 | orchestrator | 2026-02-05 01:14:01 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:14:01.515586 | orchestrator | 2026-02-05 01:14:01 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:04.555473 | orchestrator | 2026-02-05 01:14:04 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state STARTED
2026-02-05 01:14:04.555542 | orchestrator | 2026-02-05 01:14:04 | INFO  | Wait 1 second(s) until the next check
2026-02-05 01:14:07.598381 | orchestrator | 2026-02-05 01:14:07 | INFO  | Task b9222bdc-caa4-4467-b683-3d954619b1d1 is in state SUCCESS
2026-02-05 01:14:07.600086 | orchestrator |
2026-02-05 01:14:07.600137 | orchestrator |
2026-02-05 01:14:07.600150 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-05 01:14:07.600162 | orchestrator |
2026-02-05 01:14:07.600173 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-05 01:14:07.600208 | orchestrator | Thursday 05 February 2026 01:09:28 +0000 (0:00:00.296) 0:00:00.296 *****
2026-02-05 01:14:07.600220 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:14:07.600232 | orchestrator | ok: [testbed-node-1]
2026-02-05 01:14:07.600242 | orchestrator | ok: [testbed-node-2]
2026-02-05 01:14:07.600253 | orchestrator |
2026-02-05 01:14:07.600264 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-05 01:14:07.600276 | orchestrator | Thursday 05 February 2026 01:09:28 +0000 (0:00:00.289) 0:00:00.586 *****
2026-02-05 01:14:07.600326 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-05 01:14:07.600339 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-05 01:14:07.600350 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-05 01:14:07.600443 | orchestrator |
2026-02-05 01:14:07.600455 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-05 01:14:07.600466 | orchestrator |
2026-02-05 01:14:07.600478 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 01:14:07.600502 | orchestrator | Thursday 05 February 2026 01:09:29 +0000 (0:00:00.431) 0:00:01.017 *****
2026-02-05 01:14:07.600513 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:14:07.600525 | orchestrator |
2026-02-05 01:14:07.600536 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-05 01:14:07.600547 | orchestrator | Thursday 05 February 2026 01:09:29 +0000 (0:00:00.627) 0:00:01.645 *****
2026-02-05 01:14:07.600558 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-05 01:14:07.600569 | orchestrator |
2026-02-05 01:14:07.600580 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-05 01:14:07.600590 | orchestrator | Thursday 05 February 2026 01:09:33 +0000 (0:00:03.414) 0:00:05.060 *****
2026-02-05 01:14:07.600602 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-05 01:14:07.600613 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-05 01:14:07.600624 | orchestrator |
2026-02-05 01:14:07.600635 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-05 01:14:07.600646 | orchestrator | Thursday 05 February 2026 01:09:39 +0000 (0:00:05.978) 0:00:11.038 *****
2026-02-05 01:14:07.600658 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-05 01:14:07.600672 | orchestrator |
2026-02-05 01:14:07.600686 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-05 01:14:07.600698 | orchestrator | Thursday 05 February 2026 01:09:42 +0000 (0:00:02.962) 0:00:14.000 *****
2026-02-05 01:14:07.600710 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-05 01:14:07.600723 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-05 01:14:07.600736 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-05 01:14:07.600749 | orchestrator |
2026-02-05 01:14:07.600767 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-05 01:14:07.600792 | orchestrator | Thursday 05 February 2026 01:09:50 +0000 (0:00:08.424) 0:00:22.425 *****
2026-02-05 01:14:07.600818 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-05 01:14:07.600837 | orchestrator |
2026-02-05 01:14:07.600856 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-05 01:14:07.600875 | orchestrator | Thursday 05 February 2026 01:09:53 +0000 (0:00:03.323) 0:00:25.749 *****
2026-02-05 01:14:07.600894 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-05 01:14:07.600915 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-05 01:14:07.600934 | orchestrator |
2026-02-05 01:14:07.601068 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-05 01:14:07.601114 | orchestrator | Thursday 05 February 2026 01:10:00 +0000 (0:00:07.056) 0:00:32.806 *****
2026-02-05 01:14:07.601136 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-05 01:14:07.601154 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-05 01:14:07.601172 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-05 01:14:07.601190 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-05 01:14:07.601206 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-05 01:14:07.601222 | orchestrator |
2026-02-05 01:14:07.601238 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-05 01:14:07.601257 | orchestrator | Thursday 05 February 2026 01:10:17 +0000 (0:00:16.325) 0:00:49.131 *****
2026-02-05 01:14:07.601275 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-05 01:14:07.601296 | orchestrator |
2026-02-05 01:14:07.601315 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-05 01:14:07.601335 | orchestrator | Thursday 05 February 2026 01:10:17 +0000 (0:00:00.591) 0:00:49.722 *****
2026-02-05 01:14:07.601354 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:07.601373 | orchestrator |
2026-02-05 01:14:07.601391 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-05 01:14:07.601408 | orchestrator | Thursday 05 February 2026 01:10:23 +0000 (0:00:05.551) 0:00:55.273 *****
2026-02-05 01:14:07.601425 | orchestrator | changed: [testbed-node-0]
2026-02-05 01:14:07.601441 | orchestrator |
2026-02-05 01:14:07.601460 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-05 01:14:07.601501 | orchestrator | Thursday 05 February 2026 01:10:28 +0000 (0:00:04.658) 0:00:59.932 *****
2026-02-05 01:14:07.601520 | orchestrator | ok: [testbed-node-0]
2026-02-05 01:14:07.601539 | orchestrator |
2026-02-05 01:14:07.601550 | orchestrator | TASK [octavia : Create security groups for
octavia] **************************** 2026-02-05 01:14:07.601561 | orchestrator | Thursday 05 February 2026 01:10:31 +0000 (0:00:03.229) 0:01:03.162 ***** 2026-02-05 01:14:07.601572 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-05 01:14:07.601582 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-05 01:14:07.601592 | orchestrator | 2026-02-05 01:14:07.601603 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-05 01:14:07.601620 | orchestrator | Thursday 05 February 2026 01:10:41 +0000 (0:00:09.743) 0:01:12.905 ***** 2026-02-05 01:14:07.601648 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-05 01:14:07.601666 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-05 01:14:07.601693 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-05 01:14:07.601710 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-05 01:14:07.601723 | orchestrator | 2026-02-05 01:14:07.601739 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-05 01:14:07.601755 | orchestrator | Thursday 05 February 2026 01:10:57 +0000 (0:00:16.277) 0:01:29.183 ***** 2026-02-05 01:14:07.601771 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.601787 | orchestrator | 2026-02-05 01:14:07.601803 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-05 01:14:07.601818 | orchestrator | Thursday 05 February 2026 01:11:01 +0000 (0:00:04.388) 0:01:33.572 ***** 
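The "Add rules for security groups" items recorded above are pairs: the first element names one of the two management security groups, the second is a rule for it. A minimal Python sketch of that expansion (group names, flags, and rule values are taken from the log lines above; the nested-loop pairing is an assumption about how the role builds its loop items, not read from the role source):

```python
# Rules recorded in this deployment log, keyed by security group.
rules_by_group = {
    "lb-mgmt-sec-grp": [
        {"protocol": "icmp"},
        {"protocol": "tcp", "src_port": 22, "dst_port": 22},
        {"protocol": "tcp", "src_port": "9443", "dst_port": "9443"},
    ],
    "lb-health-mgr-sec-grp": [
        {"protocol": "udp", "src_port": "5555", "dst_port": "5555"},
    ],
}

def expand(rules_by_group):
    """Yield [group-dict, rule-dict] pairs shaped like the task's loop items."""
    for group, rules in rules_by_group.items():
        for rule in rules:
            yield [{"name": group, "enabled": True}, rule]

items = list(expand(rules_by_group))
```

The four yielded pairs match the four `changed:` items logged for the task: ICMP, SSH (22/tcp) and the amphora API (9443/tcp) on `lb-mgmt-sec-grp`, plus the health-manager heartbeat port (5555/udp) on `lb-health-mgr-sec-grp`.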
2026-02-05 01:14:07.601834 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.601849 | orchestrator | 2026-02-05 01:14:07.601864 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-05 01:14:07.601891 | orchestrator | Thursday 05 February 2026 01:11:06 +0000 (0:00:04.576) 0:01:38.148 ***** 2026-02-05 01:14:07.601912 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.601929 | orchestrator | 2026-02-05 01:14:07.601966 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-05 01:14:07.601992 | orchestrator | Thursday 05 February 2026 01:11:06 +0000 (0:00:00.203) 0:01:38.351 ***** 2026-02-05 01:14:07.602013 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.602095 | orchestrator | 2026-02-05 01:14:07.602114 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 01:14:07.602131 | orchestrator | Thursday 05 February 2026 01:11:10 +0000 (0:00:04.290) 0:01:42.642 ***** 2026-02-05 01:14:07.602149 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:14:07.602167 | orchestrator | 2026-02-05 01:14:07.602183 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-05 01:14:07.602201 | orchestrator | Thursday 05 February 2026 01:11:11 +0000 (0:00:01.106) 0:01:43.749 ***** 2026-02-05 01:14:07.602218 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602235 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602252 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602270 | orchestrator | 2026-02-05 01:14:07.602288 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-05 01:14:07.602306 | orchestrator | Thursday 05 February 2026 01:11:16 +0000 (0:00:04.758) 0:01:48.507 ***** 
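In the port tasks above, each node receives its own Neutron port on the lb-mgmt network, and the follow-up task pins that port to its node by updating the port's binding host (`binding:host_id` is a standard Neutron port attribute). A hedged Python sketch of the per-node update payloads — hostnames come from the log; building the payload this way is an assumption about what the role sends, not taken from the role source:

```python
# Nodes that received a health-manager port in the log above.
nodes = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]

def port_update_payload(hostname):
    # Binding the port to its host lets Neutron wire it into that
    # node's local Open vSwitch, where a later task attaches it to
    # br-int as the ohm0 interface.
    return {"binding:host_id": hostname}

payloads = {node: port_update_payload(node) for node in nodes}
```

Once bound and plugged into br-int, the interface is addressed via dhclient, which is what the subsequent isc-dhcp-client, dhclient conf, and octavia-interface.service tasks set up.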
2026-02-05 01:14:07.602324 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602341 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602358 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602368 | orchestrator | 2026-02-05 01:14:07.602378 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-05 01:14:07.602387 | orchestrator | Thursday 05 February 2026 01:11:21 +0000 (0:00:05.016) 0:01:53.523 ***** 2026-02-05 01:14:07.602397 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602407 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602416 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602426 | orchestrator | 2026-02-05 01:14:07.602436 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-05 01:14:07.602445 | orchestrator | Thursday 05 February 2026 01:11:22 +0000 (0:00:00.713) 0:01:54.237 ***** 2026-02-05 01:14:07.602455 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:07.602465 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.602474 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:07.602484 | orchestrator | 2026-02-05 01:14:07.602493 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-05 01:14:07.602503 | orchestrator | Thursday 05 February 2026 01:11:24 +0000 (0:00:01.691) 0:01:55.929 ***** 2026-02-05 01:14:07.602513 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602522 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602532 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602542 | orchestrator | 2026-02-05 01:14:07.602551 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-05 01:14:07.602561 | orchestrator | Thursday 05 February 2026 01:11:25 +0000 (0:00:01.115) 0:01:57.045 ***** 2026-02-05 01:14:07.602571 | 
orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602580 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602590 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602599 | orchestrator | 2026-02-05 01:14:07.602609 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-05 01:14:07.602619 | orchestrator | Thursday 05 February 2026 01:11:26 +0000 (0:00:01.057) 0:01:58.102 ***** 2026-02-05 01:14:07.602628 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602638 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602648 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602657 | orchestrator | 2026-02-05 01:14:07.602678 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-05 01:14:07.602697 | orchestrator | Thursday 05 February 2026 01:11:27 +0000 (0:00:01.683) 0:01:59.786 ***** 2026-02-05 01:14:07.602707 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.602717 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.602727 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.602737 | orchestrator | 2026-02-05 01:14:07.602746 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-05 01:14:07.602756 | orchestrator | Thursday 05 February 2026 01:11:29 +0000 (0:00:01.508) 0:02:01.295 ***** 2026-02-05 01:14:07.602786 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.602796 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:07.602806 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:07.602816 | orchestrator | 2026-02-05 01:14:07.602826 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-05 01:14:07.602835 | orchestrator | Thursday 05 February 2026 01:11:30 +0000 (0:00:00.772) 0:02:02.067 ***** 2026-02-05 01:14:07.602845 | orchestrator | ok: 
[testbed-node-2] 2026-02-05 01:14:07.602855 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:07.602914 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.602925 | orchestrator | 2026-02-05 01:14:07.602934 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 01:14:07.603041 | orchestrator | Thursday 05 February 2026 01:11:32 +0000 (0:00:02.465) 0:02:04.533 ***** 2026-02-05 01:14:07.603059 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:14:07.603069 | orchestrator | 2026-02-05 01:14:07.603078 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-05 01:14:07.603088 | orchestrator | Thursday 05 February 2026 01:11:33 +0000 (0:00:00.516) 0:02:05.049 ***** 2026-02-05 01:14:07.603098 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.603107 | orchestrator | 2026-02-05 01:14:07.603117 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-05 01:14:07.603126 | orchestrator | Thursday 05 February 2026 01:11:36 +0000 (0:00:03.762) 0:02:08.811 ***** 2026-02-05 01:14:07.603136 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.603145 | orchestrator | 2026-02-05 01:14:07.603155 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-05 01:14:07.603164 | orchestrator | Thursday 05 February 2026 01:11:40 +0000 (0:00:03.217) 0:02:12.029 ***** 2026-02-05 01:14:07.603174 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-05 01:14:07.603184 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-05 01:14:07.603193 | orchestrator | 2026-02-05 01:14:07.603203 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-05 01:14:07.603212 | orchestrator | Thursday 05 February 
2026 01:11:47 +0000 (0:00:06.877) 0:02:18.906 ***** 2026-02-05 01:14:07.603222 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.603232 | orchestrator | 2026-02-05 01:14:07.603241 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-05 01:14:07.603251 | orchestrator | Thursday 05 February 2026 01:11:50 +0000 (0:00:03.203) 0:02:22.109 ***** 2026-02-05 01:14:07.603260 | orchestrator | ok: [testbed-node-0] 2026-02-05 01:14:07.603270 | orchestrator | ok: [testbed-node-1] 2026-02-05 01:14:07.603280 | orchestrator | ok: [testbed-node-2] 2026-02-05 01:14:07.603289 | orchestrator | 2026-02-05 01:14:07.603299 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-05 01:14:07.603308 | orchestrator | Thursday 05 February 2026 01:11:50 +0000 (0:00:00.328) 0:02:22.438 ***** 2026-02-05 01:14:07.603322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.603355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.603371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.603382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.603394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.603404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.603415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 
'timeout': '30'}}}) 2026-02-05 01:14:07.603472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603509 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.603530 | orchestrator | 2026-02-05 01:14:07.603540 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-05 01:14:07.603550 | orchestrator | Thursday 05 February 2026 01:11:52 +0000 (0:00:02.259) 0:02:24.697 ***** 2026-02-05 01:14:07.603560 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.603571 | orchestrator | 2026-02-05 01:14:07.603585 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-05 01:14:07.603595 | orchestrator | Thursday 05 February 2026 01:11:53 +0000 (0:00:00.136) 0:02:24.834 ***** 2026-02-05 01:14:07.603605 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.603615 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:07.603624 | orchestrator | skipping: 
[testbed-node-2] 2026-02-05 01:14:07.603634 | orchestrator | 2026-02-05 01:14:07.603643 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-05 01:14:07.603653 | orchestrator | Thursday 05 February 2026 01:11:53 +0000 (0:00:00.477) 0:02:25.311 ***** 2026-02-05 01:14:07.603667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.603678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.603689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.603705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.603715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.603725 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.603742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.603762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.603772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 
01:14:07.603788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.603799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.603809 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:07.603820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.603837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.603847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.603862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.603879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.603891 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:07.603908 | orchestrator | 2026-02-05 01:14:07.603920 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 01:14:07.603930 | orchestrator | Thursday 05 February 2026 01:11:54 +0000 (0:00:00.669) 0:02:25.980 ***** 2026-02-05 01:14:07.603940 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-05 01:14:07.603971 | orchestrator | 2026-02-05 01:14:07.603982 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-05 01:14:07.603995 | orchestrator | Thursday 05 February 2026 01:11:54 +0000 (0:00:00.520) 0:02:26.501 ***** 2026-02-05 01:14:07.604006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.604023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.604039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.604055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.604066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.604076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.604086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.604198 | orchestrator | 2026-02-05 01:14:07.604208 | orchestrator | TASK [service-cert-copy : octavia | Copying over 
backend internal TLS certificate] *** 2026-02-05 01:14:07.604218 | orchestrator | Thursday 05 February 2026 01:12:00 +0000 (0:00:05.378) 0:02:31.879 ***** 2026-02-05 01:14:07.604232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.604248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.604258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.604288 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:07.604305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.604315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.604334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.604365 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.604375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.604385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.604401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.604441 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:07.604454 | orchestrator | 2026-02-05 01:14:07.604471 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-05 01:14:07.604497 | orchestrator | Thursday 05 February 2026 01:12:01 +0000 (0:00:01.146) 0:02:33.026 ***** 2026-02-05 01:14:07.604516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2026-02-05 01:14:07.604533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.604550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.604638 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.604656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.604673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.604687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 
01:14:07.604736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-05 01:14:07.604746 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:07.604757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-05 01:14:07.604767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-05 01:14:07.604788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-05 01:14:07.604798 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:07.604814 | orchestrator | 2026-02-05 01:14:07.604824 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-05 01:14:07.604834 | orchestrator | Thursday 05 February 2026 01:12:02 +0000 (0:00:01.289) 0:02:34.316 ***** 2026-02-05 01:14:07.604851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.604866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.604877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.604887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.604898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.604914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.605313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605499 | orchestrator | 2026-02-05 01:14:07.605514 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-05 01:14:07.605528 | orchestrator | Thursday 05 February 2026 01:12:07 +0000 (0:00:04.767) 0:02:39.083 ***** 2026-02-05 01:14:07.605542 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-05 01:14:07.605558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-05 01:14:07.605572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-05 01:14:07.605586 | orchestrator | 2026-02-05 01:14:07.605597 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-05 01:14:07.605605 | orchestrator | Thursday 05 February 2026 01:12:09 +0000 (0:00:02.247) 0:02:41.331 ***** 2026-02-05 01:14:07.605613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.605628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.605643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.605656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.605665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.605674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.605682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.605788 | orchestrator | 2026-02-05 01:14:07.605799 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-05 01:14:07.605810 | orchestrator | Thursday 05 February 2026 01:12:27 +0000 (0:00:17.969) 0:02:59.301 ***** 2026-02-05 01:14:07.605821 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.605836 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.605855 | orchestrator | changed: [testbed-node-2] 
2026-02-05 01:14:07.605868 | orchestrator | 2026-02-05 01:14:07.605880 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-05 01:14:07.605893 | orchestrator | Thursday 05 February 2026 01:12:29 +0000 (0:00:01.622) 0:03:00.923 ***** 2026-02-05 01:14:07.605906 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.605917 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.605938 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.605971 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.605985 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.605999 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606013 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606048 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606057 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606065 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606073 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606081 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606089 | orchestrator | 2026-02-05 01:14:07.606101 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-05 01:14:07.606114 | orchestrator | Thursday 05 February 2026 01:12:34 +0000 (0:00:05.076) 0:03:06.000 ***** 2026-02-05 01:14:07.606125 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.606143 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 
01:14:07.606155 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.606177 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606192 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606205 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606219 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606231 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606245 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606258 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606286 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606300 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606314 | orchestrator | 2026-02-05 01:14:07.606328 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-05 01:14:07.606342 | orchestrator | Thursday 05 February 2026 01:12:39 +0000 (0:00:05.035) 0:03:11.035 ***** 2026-02-05 01:14:07.606357 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.606371 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.606386 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-05 01:14:07.606400 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606414 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606428 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-05 01:14:07.606442 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-05 
01:14:07.606455 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606468 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-05 01:14:07.606483 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606496 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606510 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-05 01:14:07.606524 | orchestrator | 2026-02-05 01:14:07.606538 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-05 01:14:07.606550 | orchestrator | Thursday 05 February 2026 01:12:43 +0000 (0:00:04.600) 0:03:15.636 ***** 2026-02-05 01:14:07.606564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.606591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.606607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-05 01:14:07.606630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.606646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.606661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-05 01:14:07.606675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606696 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606795 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-05 01:14:07.606877 | orchestrator | 2026-02-05 01:14:07.606892 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-05 01:14:07.606906 | orchestrator | Thursday 05 February 2026 01:12:48 +0000 (0:00:04.399) 0:03:20.035 ***** 2026-02-05 01:14:07.606920 | orchestrator | skipping: [testbed-node-0] 2026-02-05 01:14:07.606941 | orchestrator | skipping: [testbed-node-1] 2026-02-05 01:14:07.606975 | orchestrator | skipping: [testbed-node-2] 2026-02-05 01:14:07.606989 | orchestrator | 2026-02-05 01:14:07.607003 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-05 01:14:07.607016 | orchestrator | Thursday 05 February 2026 01:12:48 +0000 (0:00:00.612) 0:03:20.648 ***** 2026-02-05 01:14:07.607029 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607043 | orchestrator | 2026-02-05 01:14:07.607057 | 
orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-05 01:14:07.607070 | orchestrator | Thursday 05 February 2026 01:12:51 +0000 (0:00:02.197) 0:03:22.845 ***** 2026-02-05 01:14:07.607083 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607096 | orchestrator | 2026-02-05 01:14:07.607114 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-05 01:14:07.607127 | orchestrator | Thursday 05 February 2026 01:12:53 +0000 (0:00:02.151) 0:03:24.997 ***** 2026-02-05 01:14:07.607140 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607153 | orchestrator | 2026-02-05 01:14:07.607167 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-05 01:14:07.607180 | orchestrator | Thursday 05 February 2026 01:12:55 +0000 (0:00:02.501) 0:03:27.498 ***** 2026-02-05 01:14:07.607194 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607208 | orchestrator | 2026-02-05 01:14:07.607221 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-02-05 01:14:07.607234 | orchestrator | Thursday 05 February 2026 01:12:58 +0000 (0:00:02.340) 0:03:29.838 ***** 2026-02-05 01:14:07.607247 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607261 | orchestrator | 2026-02-05 01:14:07.607274 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-05 01:14:07.607288 | orchestrator | Thursday 05 February 2026 01:13:19 +0000 (0:00:21.967) 0:03:51.806 ***** 2026-02-05 01:14:07.607302 | orchestrator | 2026-02-05 01:14:07.607314 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-05 01:14:07.607327 | orchestrator | Thursday 05 February 2026 01:13:20 +0000 (0:00:00.083) 0:03:51.890 ***** 2026-02-05 01:14:07.607340 | orchestrator | 2026-02-05 
01:14:07.607354 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-05 01:14:07.607367 | orchestrator | Thursday 05 February 2026 01:13:20 +0000 (0:00:00.066) 0:03:51.957 ***** 2026-02-05 01:14:07.607380 | orchestrator | 2026-02-05 01:14:07.607393 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-05 01:14:07.607404 | orchestrator | Thursday 05 February 2026 01:13:20 +0000 (0:00:00.068) 0:03:52.025 ***** 2026-02-05 01:14:07.607417 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607430 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.607444 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.607457 | orchestrator | 2026-02-05 01:14:07.607472 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-05 01:14:07.607485 | orchestrator | Thursday 05 February 2026 01:13:33 +0000 (0:00:13.599) 0:04:05.625 ***** 2026-02-05 01:14:07.607498 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607511 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.607525 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.607538 | orchestrator | 2026-02-05 01:14:07.607552 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-05 01:14:07.607565 | orchestrator | Thursday 05 February 2026 01:13:45 +0000 (0:00:11.381) 0:04:17.006 ***** 2026-02-05 01:14:07.607579 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607592 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.607605 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.607618 | orchestrator | 2026-02-05 01:14:07.607632 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-05 01:14:07.607646 | orchestrator | Thursday 05 February 2026 01:13:50 +0000 (0:00:05.732) 0:04:22.738 
***** 2026-02-05 01:14:07.607672 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607684 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.607697 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.607711 | orchestrator | 2026-02-05 01:14:07.607725 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-05 01:14:07.607738 | orchestrator | Thursday 05 February 2026 01:13:55 +0000 (0:00:04.988) 0:04:27.727 ***** 2026-02-05 01:14:07.607751 | orchestrator | changed: [testbed-node-1] 2026-02-05 01:14:07.607765 | orchestrator | changed: [testbed-node-0] 2026-02-05 01:14:07.607779 | orchestrator | changed: [testbed-node-2] 2026-02-05 01:14:07.607791 | orchestrator | 2026-02-05 01:14:07.607804 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:14:07.607819 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-05 01:14:07.607833 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:14:07.607847 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-05 01:14:07.607861 | orchestrator | 2026-02-05 01:14:07.607874 | orchestrator | 2026-02-05 01:14:07.607887 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:14:07.607900 | orchestrator | Thursday 05 February 2026 01:14:06 +0000 (0:00:10.548) 0:04:38.275 ***** 2026-02-05 01:14:07.607925 | orchestrator | =============================================================================== 2026-02-05 01:14:07.607939 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.97s 2026-02-05 01:14:07.608005 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.97s 2026-02-05 
01:14:07.608019 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.33s 2026-02-05 01:14:07.608033 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.28s 2026-02-05 01:14:07.608045 | orchestrator | octavia : Restart octavia-api container -------------------------------- 13.60s 2026-02-05 01:14:07.608056 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.38s 2026-02-05 01:14:07.608067 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.55s 2026-02-05 01:14:07.608078 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.74s 2026-02-05 01:14:07.608089 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.43s 2026-02-05 01:14:07.608101 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.06s 2026-02-05 01:14:07.608117 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.88s 2026-02-05 01:14:07.608129 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 5.98s 2026-02-05 01:14:07.608141 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.73s 2026-02-05 01:14:07.608152 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.55s 2026-02-05 01:14:07.608163 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.38s 2026-02-05 01:14:07.608174 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.08s 2026-02-05 01:14:07.608185 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.04s 2026-02-05 01:14:07.608197 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.02s 2026-02-05 01:14:07.608208 
| orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 4.99s 2026-02-05 01:14:07.608220 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.77s 2026-02-05 01:14:07.608232 | orchestrator | 2026-02-05 01:14:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:10.643011 | orchestrator | 2026-02-05 01:14:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:13.679254 | orchestrator | 2026-02-05 01:14:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:16.719400 | orchestrator | 2026-02-05 01:14:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:19.757743 | orchestrator | 2026-02-05 01:14:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:22.792812 | orchestrator | 2026-02-05 01:14:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:25.837225 | orchestrator | 2026-02-05 01:14:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:28.877063 | orchestrator | 2026-02-05 01:14:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:31.919713 | orchestrator | 2026-02-05 01:14:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:34.959816 | orchestrator | 2026-02-05 01:14:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:37.998050 | orchestrator | 2026-02-05 01:14:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:41.044145 | orchestrator | 2026-02-05 01:14:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:44.079001 | orchestrator | 2026-02-05 01:14:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:47.118061 | orchestrator | 2026-02-05 01:14:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:50.159810 | orchestrator | 2026-02-05 01:14:50 | INFO 
 | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:53.193983 | orchestrator | 2026-02-05 01:14:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:56.244397 | orchestrator | 2026-02-05 01:14:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:14:59.284403 | orchestrator | 2026-02-05 01:14:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:15:02.327425 | orchestrator | 2026-02-05 01:15:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:15:05.366474 | orchestrator | 2026-02-05 01:15:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-02-05 01:15:08.403313 | orchestrator | 2026-02-05 01:15:08.693191 | orchestrator | 2026-02-05 01:15:08.697696 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Feb 5 01:15:08 UTC 2026 2026-02-05 01:15:08.697777 | orchestrator | 2026-02-05 01:15:09.131397 | orchestrator | ok: Runtime: 0:34:15.033433 2026-02-05 01:15:09.381960 | 2026-02-05 01:15:09.382111 | TASK [Bootstrap services] 2026-02-05 01:15:10.228451 | orchestrator | 2026-02-05 01:15:10.228616 | orchestrator | # BOOTSTRAP 2026-02-05 01:15:10.228633 | orchestrator | 2026-02-05 01:15:10.228642 | orchestrator | + set -e 2026-02-05 01:15:10.228649 | orchestrator | + echo 2026-02-05 01:15:10.228658 | orchestrator | + echo '# BOOTSTRAP' 2026-02-05 01:15:10.228811 | orchestrator | + echo 2026-02-05 01:15:10.228846 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-05 01:15:10.238994 | orchestrator | + set -e 2026-02-05 01:15:10.239092 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-05 01:15:14.417175 | orchestrator | 2026-02-05 01:15:14 | INFO  | It takes a moment until task 25b3446e-190d-4374-9fbd-a02434b3ce0e (flavor-manager) has been started and output is visible here. 
2026-02-05 01:15:21.348531 | orchestrator | 2026-02-05 01:15:16 | INFO  | Flavor SCS-1L-1 created 2026-02-05 01:15:21.348682 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1L-1-5 created 2026-02-05 01:15:21.348704 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1V-2 created 2026-02-05 01:15:21.348711 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1V-2-5 created 2026-02-05 01:15:21.348727 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1V-4 created 2026-02-05 01:15:21.348735 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1V-4-10 created 2026-02-05 01:15:21.348743 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1V-8 created 2026-02-05 01:15:21.348758 | orchestrator | 2026-02-05 01:15:17 | INFO  | Flavor SCS-1V-8-20 created 2026-02-05 01:15:21.348785 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-2V-4 created 2026-02-05 01:15:21.348799 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-2V-4-10 created 2026-02-05 01:15:21.348807 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-2V-8 created 2026-02-05 01:15:21.348814 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-2V-8-20 created 2026-02-05 01:15:21.348860 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-2V-16 created 2026-02-05 01:15:21.348918 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-2V-16-50 created 2026-02-05 01:15:21.348926 | orchestrator | 2026-02-05 01:15:18 | INFO  | Flavor SCS-4V-8 created 2026-02-05 01:15:21.348942 | orchestrator | 2026-02-05 01:15:19 | INFO  | Flavor SCS-4V-8-20 created 2026-02-05 01:15:21.348949 | orchestrator | 2026-02-05 01:15:19 | INFO  | Flavor SCS-4V-16 created 2026-02-05 01:15:21.348956 | orchestrator | 2026-02-05 01:15:19 | INFO  | Flavor SCS-4V-16-50 created 2026-02-05 01:15:21.348964 | orchestrator | 2026-02-05 01:15:19 | INFO  | Flavor SCS-4V-32 created 2026-02-05 01:15:21.348971 | orchestrator | 2026-02-05 01:15:19 | INFO  | Flavor SCS-4V-32-100 created 
2026-02-05 01:15:21.348978 | orchestrator | 2026-02-05 01:15:19 | INFO  | Flavor SCS-8V-16 created 2026-02-05 01:15:21.348985 | orchestrator | 2026-02-05 01:15:20 | INFO  | Flavor SCS-8V-16-50 created 2026-02-05 01:15:21.348992 | orchestrator | 2026-02-05 01:15:20 | INFO  | Flavor SCS-8V-32 created 2026-02-05 01:15:21.348999 | orchestrator | 2026-02-05 01:15:20 | INFO  | Flavor SCS-8V-32-100 created 2026-02-05 01:15:21.349006 | orchestrator | 2026-02-05 01:15:20 | INFO  | Flavor SCS-16V-32 created 2026-02-05 01:15:21.349020 | orchestrator | 2026-02-05 01:15:20 | INFO  | Flavor SCS-16V-32-100 created 2026-02-05 01:15:21.349034 | orchestrator | 2026-02-05 01:15:20 | INFO  | Flavor SCS-2V-4-20s created 2026-02-05 01:15:21.349041 | orchestrator | 2026-02-05 01:15:21 | INFO  | Flavor SCS-4V-8-50s created 2026-02-05 01:15:21.349048 | orchestrator | 2026-02-05 01:15:21 | INFO  | Flavor SCS-8V-32-100s created 2026-02-05 01:15:23.749205 | orchestrator | 2026-02-05 01:15:23 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-05 01:15:33.769948 | orchestrator | 2026-02-05 01:15:33 | INFO  | Prepare task for execution of bootstrap-basic. 2026-02-05 01:15:33.842653 | orchestrator | 2026-02-05 01:15:33 | INFO  | Task f82dcc8c-89d8-460a-a173-158e89797a21 (bootstrap-basic) was prepared for execution. 2026-02-05 01:15:33.842733 | orchestrator | 2026-02-05 01:15:33 | INFO  | It takes a moment until task f82dcc8c-89d8-460a-a173-158e89797a21 (bootstrap-basic) has been started and output is visible here. 
2026-02-05 01:16:18.688120 | orchestrator | 2026-02-05 01:16:18.688225 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-05 01:16:18.688240 | orchestrator | 2026-02-05 01:16:18.688248 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-05 01:16:18.688256 | orchestrator | Thursday 05 February 2026 01:15:38 +0000 (0:00:00.071) 0:00:00.071 ***** 2026-02-05 01:16:18.688263 | orchestrator | ok: [localhost] 2026-02-05 01:16:18.688271 | orchestrator | 2026-02-05 01:16:18.688280 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-05 01:16:18.688289 | orchestrator | Thursday 05 February 2026 01:15:40 +0000 (0:00:01.913) 0:00:01.984 ***** 2026-02-05 01:16:18.688307 | orchestrator | ok: [localhost] 2026-02-05 01:16:18.688314 | orchestrator | 2026-02-05 01:16:18.688321 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-05 01:16:18.688329 | orchestrator | Thursday 05 February 2026 01:15:48 +0000 (0:00:08.882) 0:00:10.867 ***** 2026-02-05 01:16:18.688337 | orchestrator | changed: [localhost] 2026-02-05 01:16:18.688344 | orchestrator | 2026-02-05 01:16:18.688351 | orchestrator | TASK [Create public network] *************************************************** 2026-02-05 01:16:18.688358 | orchestrator | Thursday 05 February 2026 01:15:55 +0000 (0:00:07.033) 0:00:17.901 ***** 2026-02-05 01:16:18.688365 | orchestrator | changed: [localhost] 2026-02-05 01:16:18.688372 | orchestrator | 2026-02-05 01:16:18.688380 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-05 01:16:18.688390 | orchestrator | Thursday 05 February 2026 01:16:00 +0000 (0:00:05.041) 0:00:22.943 ***** 2026-02-05 01:16:18.688399 | orchestrator | changed: [localhost] 2026-02-05 01:16:18.688407 | orchestrator | 2026-02-05 01:16:18.688415 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-05 01:16:18.688422 | orchestrator | Thursday 05 February 2026 01:16:07 +0000 (0:00:06.032) 0:00:28.975 ***** 2026-02-05 01:16:18.688429 | orchestrator | changed: [localhost] 2026-02-05 01:16:18.688436 | orchestrator | 2026-02-05 01:16:18.688443 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-05 01:16:18.688450 | orchestrator | Thursday 05 February 2026 01:16:11 +0000 (0:00:04.198) 0:00:33.174 ***** 2026-02-05 01:16:18.688456 | orchestrator | changed: [localhost] 2026-02-05 01:16:18.688463 | orchestrator | 2026-02-05 01:16:18.688479 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-05 01:16:18.688487 | orchestrator | Thursday 05 February 2026 01:16:14 +0000 (0:00:03.700) 0:00:36.874 ***** 2026-02-05 01:16:18.688495 | orchestrator | ok: [localhost] 2026-02-05 01:16:18.688502 | orchestrator | 2026-02-05 01:16:18.688510 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-05 01:16:18.688519 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-05 01:16:18.688528 | orchestrator | 2026-02-05 01:16:18.688535 | orchestrator | 2026-02-05 01:16:18.688543 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-05 01:16:18.688551 | orchestrator | Thursday 05 February 2026 01:16:18 +0000 (0:00:03.531) 0:00:40.406 ***** 2026-02-05 01:16:18.688559 | orchestrator | =============================================================================== 2026-02-05 01:16:18.688568 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.88s 2026-02-05 01:16:18.688576 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.03s 2026-02-05 01:16:18.688584 | 
orchestrator | Set public network to default ------------------------------------------- 6.03s 2026-02-05 01:16:18.688612 | orchestrator | Create public network --------------------------------------------------- 5.04s 2026-02-05 01:16:18.688619 | orchestrator | Create public subnet ---------------------------------------------------- 4.20s 2026-02-05 01:16:18.688626 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.70s 2026-02-05 01:16:18.688633 | orchestrator | Create manager role ----------------------------------------------------- 3.53s 2026-02-05 01:16:18.688639 | orchestrator | Gathering Facts --------------------------------------------------------- 1.91s 2026-02-05 01:16:21.136603 | orchestrator | 2026-02-05 01:16:21 | INFO  | It takes a moment until task c68f2efc-4382-4a32-bf00-eedaaffbe749 (image-manager) has been started and output is visible here. 2026-02-05 01:17:03.640920 | orchestrator | 2026-02-05 01:16:23 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-05 01:17:03.640971 | orchestrator | 2026-02-05 01:16:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-05 01:17:03.640977 | orchestrator | 2026-02-05 01:16:24 | INFO  | Importing image Cirros 0.6.2 2026-02-05 01:17:03.640981 | orchestrator | 2026-02-05 01:16:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-05 01:17:03.640985 | orchestrator | 2026-02-05 01:16:26 | INFO  | Waiting for image to leave queued state... 2026-02-05 01:17:03.640990 | orchestrator | 2026-02-05 01:16:28 | INFO  | Waiting for import to complete... 
2026-02-05 01:17:03.640995 | orchestrator | 2026-02-05 01:16:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-05 01:17:03.640999 | orchestrator | 2026-02-05 01:16:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-05 01:17:03.641002 | orchestrator | 2026-02-05 01:16:38 | INFO  | Setting internal_version = 0.6.2 2026-02-05 01:17:03.641006 | orchestrator | 2026-02-05 01:16:38 | INFO  | Setting image_original_user = cirros 2026-02-05 01:17:03.641011 | orchestrator | 2026-02-05 01:16:38 | INFO  | Adding tag os:cirros 2026-02-05 01:17:03.641014 | orchestrator | 2026-02-05 01:16:39 | INFO  | Setting property architecture: x86_64 2026-02-05 01:17:03.641018 | orchestrator | 2026-02-05 01:16:39 | INFO  | Setting property hw_disk_bus: scsi 2026-02-05 01:17:03.641022 | orchestrator | 2026-02-05 01:16:39 | INFO  | Setting property hw_rng_model: virtio 2026-02-05 01:17:03.641026 | orchestrator | 2026-02-05 01:16:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-05 01:17:03.641030 | orchestrator | 2026-02-05 01:16:40 | INFO  | Setting property hw_watchdog_action: reset 2026-02-05 01:17:03.641033 | orchestrator | 2026-02-05 01:16:40 | INFO  | Setting property hypervisor_type: qemu 2026-02-05 01:17:03.641037 | orchestrator | 2026-02-05 01:16:40 | INFO  | Setting property os_distro: cirros 2026-02-05 01:17:03.641041 | orchestrator | 2026-02-05 01:16:40 | INFO  | Setting property os_purpose: minimal 2026-02-05 01:17:03.641045 | orchestrator | 2026-02-05 01:16:40 | INFO  | Setting property replace_frequency: never 2026-02-05 01:17:03.641048 | orchestrator | 2026-02-05 01:16:41 | INFO  | Setting property uuid_validity: none 2026-02-05 01:17:03.641052 | orchestrator | 2026-02-05 01:16:41 | INFO  | Setting property provided_until: none 2026-02-05 01:17:03.641056 | orchestrator | 2026-02-05 01:16:41 | INFO  | Setting property image_description: Cirros 2026-02-05 01:17:03.641060 | orchestrator | 2026-02-05 01:16:41 | INFO  | 
Setting property image_name: Cirros 2026-02-05 01:17:03.641063 | orchestrator | 2026-02-05 01:16:42 | INFO  | Setting property internal_version: 0.6.2 2026-02-05 01:17:03.641076 | orchestrator | 2026-02-05 01:16:42 | INFO  | Setting property image_original_user: cirros 2026-02-05 01:17:03.641080 | orchestrator | 2026-02-05 01:16:42 | INFO  | Setting property os_version: 0.6.2 2026-02-05 01:17:03.641087 | orchestrator | 2026-02-05 01:16:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-05 01:17:03.641092 | orchestrator | 2026-02-05 01:16:42 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-05 01:17:03.641096 | orchestrator | 2026-02-05 01:16:43 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-05 01:17:03.641099 | orchestrator | 2026-02-05 01:16:43 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-05 01:17:03.641103 | orchestrator | 2026-02-05 01:16:43 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-05 01:17:03.641109 | orchestrator | 2026-02-05 01:16:43 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-05 01:17:03.641113 | orchestrator | 2026-02-05 01:16:44 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-05 01:17:03.641117 | orchestrator | 2026-02-05 01:16:44 | INFO  | Importing image Cirros 0.6.3 2026-02-05 01:17:03.641120 | orchestrator | 2026-02-05 01:16:44 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-05 01:17:03.641124 | orchestrator | 2026-02-05 01:16:45 | INFO  | Waiting for image to leave queued state... 2026-02-05 01:17:03.641128 | orchestrator | 2026-02-05 01:16:47 | INFO  | Waiting for import to complete... 
2026-02-05 01:17:03.641138 | orchestrator | 2026-02-05 01:16:57 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-05 01:17:03.641143 | orchestrator | 2026-02-05 01:16:58 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-05 01:17:03.641147 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting internal_version = 0.6.3
2026-02-05 01:17:03.641154 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting image_original_user = cirros
2026-02-05 01:17:03.641161 | orchestrator | 2026-02-05 01:16:58 | INFO  | Adding tag os:cirros
2026-02-05 01:17:03.641167 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting property architecture: x86_64
2026-02-05 01:17:03.641174 | orchestrator | 2026-02-05 01:16:58 | INFO  | Setting property hw_disk_bus: scsi
2026-02-05 01:17:03.641180 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property hw_rng_model: virtio
2026-02-05 01:17:03.641186 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-05 01:17:03.641193 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property hw_watchdog_action: reset
2026-02-05 01:17:03.641200 | orchestrator | 2026-02-05 01:16:59 | INFO  | Setting property hypervisor_type: qemu
2026-02-05 01:17:03.641207 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property os_distro: cirros
2026-02-05 01:17:03.641213 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property os_purpose: minimal
2026-02-05 01:17:03.641219 | orchestrator | 2026-02-05 01:17:00 | INFO  | Setting property replace_frequency: never
2026-02-05 01:17:03.641227 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property uuid_validity: none
2026-02-05 01:17:03.641233 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property provided_until: none
2026-02-05 01:17:03.641240 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property image_description: Cirros
2026-02-05 01:17:03.641244 | orchestrator | 2026-02-05 01:17:01 | INFO  | Setting property image_name: Cirros
2026-02-05 01:17:03.641252 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property internal_version: 0.6.3
2026-02-05 01:17:03.641256 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property image_original_user: cirros
2026-02-05 01:17:03.641260 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property os_version: 0.6.3
2026-02-05 01:17:03.641264 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-05 01:17:03.641267 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-05 01:17:03.641271 | orchestrator | 2026-02-05 01:17:02 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-05 01:17:03.641275 | orchestrator | 2026-02-05 01:17:02 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-05 01:17:03.641278 | orchestrator | 2026-02-05 01:17:02 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-05 01:17:03.913008 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-05 01:17:06.224674 | orchestrator | 2026-02-05 01:17:06 | INFO  | date: 2026-02-04
2026-02-05 01:17:06.224724 | orchestrator | 2026-02-05 01:17:06 | INFO  | image: octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-05 01:17:06.224799 | orchestrator | 2026-02-05 01:17:06 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-05 01:17:06.224807 | orchestrator | 2026-02-05 01:17:06 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2.CHECKSUM
2026-02-05 01:17:06.514981 | orchestrator | 2026-02-05 01:17:06 | INFO  | checksum: fa81774e60e440b52eb763bc24f9302dc0d7fa56080593c2ba4182f5e23fdc54
2026-02-05 01:17:06.592178 | orchestrator | 2026-02-05 01:17:06 | INFO  | It takes a moment until task 5318d2c9-3164-472a-ade0-c15d01f9504b (image-manager) has been started and output is visible here.
2026-02-05 01:20:23.014331 | orchestrator | 2026-02-05 01:17:08 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-04'
2026-02-05 01:20:23.014432 | orchestrator | 2026-02-05 01:17:08 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2: 200
2026-02-05 01:20:23.014446 | orchestrator | 2026-02-05 01:17:08 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-04
2026-02-05 01:20:23.014454 | orchestrator | 2026-02-05 01:17:08 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-05 01:20:23.014463 | orchestrator | 2026-02-05 01:17:10 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014469 | orchestrator | 2026-02-05 01:17:12 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014476 | orchestrator | 2026-02-05 01:17:22 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014483 | orchestrator | 2026-02-05 01:17:32 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014489 | orchestrator | 2026-02-05 01:17:42 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014498 | orchestrator | 2026-02-05 01:17:52 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014504 | orchestrator | 2026-02-05 01:18:02 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014511 | orchestrator | 2026-02-05 01:18:12 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014535 | orchestrator | 2026-02-05 01:18:22 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014562 | orchestrator | 2026-02-05 01:18:32 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014569 | orchestrator | 2026-02-05 01:18:42 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014576 | orchestrator | 2026-02-05 01:18:53 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014583 | orchestrator | 2026-02-05 01:18:55 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014589 | orchestrator | 2026-02-05 01:18:57 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014595 | orchestrator | 2026-02-05 01:18:59 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014601 | orchestrator | 2026-02-05 01:19:01 | ERROR  | Image OpenStack Octavia Amphora 2026-02-04 seems stuck in queued state
2026-02-05 01:20:23.014609 | orchestrator | 2026-02-05 01:19:01 | WARNING  | Deleting stuck image OpenStack Octavia Amphora 2026-02-04 and retrying import
2026-02-05 01:20:23.014616 | orchestrator | 2026-02-05 01:19:01 | INFO  | Retry attempt 1/1 for image OpenStack Octavia Amphora 2026-02-04
2026-02-05 01:20:23.014622 | orchestrator | 2026-02-05 01:19:01 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014629 | orchestrator | 2026-02-05 01:19:03 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014635 | orchestrator | 2026-02-05 01:19:13 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014641 | orchestrator | 2026-02-05 01:19:23 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014647 | orchestrator | 2026-02-05 01:19:33 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014653 | orchestrator | 2026-02-05 01:19:44 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014673 | orchestrator | 2026-02-05 01:19:54 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014680 | orchestrator | 2026-02-05 01:20:04 | INFO  | Waiting for import to complete...
2026-02-05 01:20:23.014686 | orchestrator | 2026-02-05 01:20:14 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014692 | orchestrator | 2026-02-05 01:20:16 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014699 | orchestrator | 2026-02-05 01:20:18 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014705 | orchestrator | 2026-02-05 01:20:20 | INFO  | Waiting for image to leave queued state...
2026-02-05 01:20:23.014711 | orchestrator | 2026-02-05 01:20:22 | ERROR  | Image OpenStack Octavia Amphora 2026-02-04 seems stuck in queued state
2026-02-05 01:20:23.014717 | orchestrator | 2026-02-05 01:20:22 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-05 01:20:23.014724 | orchestrator | 2026-02-05 01:20:22 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-05 01:20:23.014730 | orchestrator | 2026-02-05 01:20:22 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-05 01:20:23.014756 | orchestrator | 2026-02-05 01:20:22 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-05 01:20:23.014763 | orchestrator |
2026-02-05 01:20:23.014770 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
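The failure pattern visible in the image-manager output above (poll the image status, treat an image that keeps reporting `queued` as stuck, delete it, retry the import once, then give up) can be sketched roughly as follows. This is a minimal illustration only: the function names, poll counts, and interval are assumptions, not the actual openstack-image-manager implementation.

```python
import time


def wait_until_active(get_status, max_polls=12, stuck_after=5, interval=2.0):
    """Poll `get_status()` until it returns 'active'.

    An image that reports 'queued' for `stuck_after` consecutive polls is
    treated as stuck ("seems stuck in queued state" in the log above).
    Returns True on success, False on a stuck or timed-out import.
    """
    queued_in_a_row = 0
    for _ in range(max_polls):
        status = get_status()
        if status == "active":
            return True
        if status == "queued":
            queued_in_a_row += 1
            if queued_in_a_row >= stuck_after:
                return False  # stuck in queued state
        else:  # e.g. 'importing' / 'saving': progress, reset the counter
            queued_in_a_row = 0
        time.sleep(interval)
    return False


def import_with_retry(start_import, delete_image, get_status,
                      retries=1, interval=2.0):
    """Start the import; on a stuck image, delete it and retry,
    matching the 'Retry attempt 1/1' line in the log above."""
    for _ in range(retries + 1):
        start_import()
        if wait_until_active(get_status, interval=interval):
            return True
        delete_image()  # "Deleting stuck image ... and retrying import"
    return False
```

In the run above both the first attempt and the single retry ended stuck in `queued`, so the tool reported the final error and exited non-zero.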
2026-02-05 01:20:23.622308 | orchestrator | ERROR
2026-02-05 01:20:23.622914 | orchestrator | {
2026-02-05 01:20:23.623869 | orchestrator | "delta": "0:05:13.480687",
2026-02-05 01:20:23.623958 | orchestrator | "end": "2026-02-05 01:20:23.315215",
2026-02-05 01:20:23.624016 | orchestrator | "msg": "non-zero return code",
2026-02-05 01:20:23.624067 | orchestrator | "rc": 1,
2026-02-05 01:20:23.624113 | orchestrator | "start": "2026-02-05 01:15:09.834528"
2026-02-05 01:20:23.624159 | orchestrator | } failure
2026-02-05 01:20:23.644897 |
2026-02-05 01:20:23.645261 | PLAY RECAP
2026-02-05 01:20:23.645731 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-02-05 01:20:23.645799 |
2026-02-05 01:20:23.881064 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-02-05 01:20:23.883377 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-05 01:20:24.614449 |
2026-02-05 01:20:24.614727 | PLAY [Post output play]
2026-02-05 01:20:24.630958 |
2026-02-05 01:20:24.631110 | LOOP [stage-output : Register sources]
2026-02-05 01:20:24.699795 |
2026-02-05 01:20:24.700095 | TASK [stage-output : Check sudo]
2026-02-05 01:20:25.559477 | orchestrator | sudo: a password is required
2026-02-05 01:20:25.738275 | orchestrator | ok: Runtime: 0:00:00.010638
2026-02-05 01:20:25.752506 |
2026-02-05 01:20:25.752705 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-05 01:20:25.793496 |
2026-02-05 01:20:25.793792 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-05 01:20:25.864086 | orchestrator | ok
2026-02-05 01:20:25.873139 |
2026-02-05 01:20:25.873276 | LOOP [stage-output : Ensure target folders exist]
2026-02-05 01:20:26.358532 | orchestrator | ok: "docs"
2026-02-05 01:20:26.359128 |
2026-02-05 01:20:26.656046 | orchestrator | ok: "artifacts"
2026-02-05 01:20:26.931032 | orchestrator | ok: "logs"
2026-02-05 01:20:26.952032 |
2026-02-05 01:20:26.952199 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-05 01:20:27.002444 |
2026-02-05 01:20:27.002762 | TASK [stage-output : Make all log files readable]
2026-02-05 01:20:27.330924 | orchestrator | ok
2026-02-05 01:20:27.340098 |
2026-02-05 01:20:27.340225 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-05 01:20:27.375242 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:27.390794 |
2026-02-05 01:20:27.390966 | TASK [stage-output : Discover log files for compression]
2026-02-05 01:20:27.416352 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:27.430227 |
2026-02-05 01:20:27.430372 | LOOP [stage-output : Archive everything from logs]
2026-02-05 01:20:27.488536 |
2026-02-05 01:20:27.488786 | PLAY [Post cleanup play]
2026-02-05 01:20:27.504728 |
2026-02-05 01:20:27.504894 | TASK [Set cloud fact (Zuul deployment)]
2026-02-05 01:20:27.567936 | orchestrator | ok
2026-02-05 01:20:27.576621 |
2026-02-05 01:20:27.576727 | TASK [Set cloud fact (local deployment)]
2026-02-05 01:20:27.613612 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:27.628999 |
2026-02-05 01:20:27.629127 | TASK [Clean the cloud environment]
2026-02-05 01:20:29.241629 | orchestrator | 2026-02-05 01:20:29 - clean up servers
2026-02-05 01:20:30.021353 | orchestrator | 2026-02-05 01:20:30 - testbed-manager
2026-02-05 01:20:30.107330 | orchestrator | 2026-02-05 01:20:30 - testbed-node-4
2026-02-05 01:20:30.195816 | orchestrator | 2026-02-05 01:20:30 - testbed-node-3
2026-02-05 01:20:30.278446 | orchestrator | 2026-02-05 01:20:30 - testbed-node-2
2026-02-05 01:20:30.365685 | orchestrator | 2026-02-05 01:20:30 - testbed-node-0
2026-02-05 01:20:30.465325 | orchestrator | 2026-02-05 01:20:30 - testbed-node-5
2026-02-05 01:20:30.554737 | orchestrator | 2026-02-05 01:20:30 - testbed-node-1
2026-02-05 01:20:30.643179 | orchestrator | 2026-02-05 01:20:30 - clean up keypairs
2026-02-05 01:20:30.664858 | orchestrator | 2026-02-05 01:20:30 - testbed
2026-02-05 01:20:30.687971 | orchestrator | 2026-02-05 01:20:30 - wait for servers to be gone
2026-02-05 01:20:43.740632 | orchestrator | 2026-02-05 01:20:43 - clean up ports
2026-02-05 01:20:43.926245 | orchestrator | 2026-02-05 01:20:43 - 0cd99f2e-81b2-4879-9fe7-8d94e4f5d3eb
2026-02-05 01:20:44.338793 | orchestrator | 2026-02-05 01:20:44 - 208a4c6c-b3cf-418a-845a-0b7969fa91e0
2026-02-05 01:20:44.637204 | orchestrator | 2026-02-05 01:20:44 - 37219c90-1ad3-475e-891b-a01abeabb354
2026-02-05 01:20:44.882770 | orchestrator | 2026-02-05 01:20:44 - 6f615b29-8d6c-4b66-8447-1a75ac915ad4
2026-02-05 01:20:45.107262 | orchestrator | 2026-02-05 01:20:45 - d9bb010a-218b-494a-b302-af31ae66f7b8
2026-02-05 01:20:45.314712 | orchestrator | 2026-02-05 01:20:45 - ecd1e86e-df94-45ac-baeb-fdb51869f985
2026-02-05 01:20:45.564226 | orchestrator | 2026-02-05 01:20:45 - f08a75da-f76d-49c0-95d9-0314a65e618d
2026-02-05 01:20:45.781464 | orchestrator | 2026-02-05 01:20:45 - clean up volumes
2026-02-05 01:20:46.051576 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-4-node-base
2026-02-05 01:20:46.093429 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-5-node-base
2026-02-05 01:20:46.132650 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-0-node-base
2026-02-05 01:20:46.176678 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-1-node-base
2026-02-05 01:20:46.215782 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-3-node-base
2026-02-05 01:20:46.258992 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-2-node-base
2026-02-05 01:20:46.310371 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-manager-base
2026-02-05 01:20:46.360549 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-0-node-3
2026-02-05 01:20:46.404690 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-4-node-4
2026-02-05 01:20:46.448402 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-6-node-3
2026-02-05 01:20:46.490151 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-3-node-3
2026-02-05 01:20:46.528530 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-7-node-4
2026-02-05 01:20:46.570354 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-8-node-5
2026-02-05 01:20:46.609480 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-1-node-4
2026-02-05 01:20:46.652439 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-2-node-5
2026-02-05 01:20:46.697131 | orchestrator | 2026-02-05 01:20:46 - testbed-volume-5-node-5
2026-02-05 01:20:46.736826 | orchestrator | 2026-02-05 01:20:46 - disconnect routers
2026-02-05 01:20:46.884249 | orchestrator | 2026-02-05 01:20:46 - testbed
2026-02-05 01:20:48.414307 | orchestrator | 2026-02-05 01:20:48 - clean up subnets
2026-02-05 01:20:48.465869 | orchestrator | 2026-02-05 01:20:48 - subnet-testbed-management
2026-02-05 01:20:48.605921 | orchestrator | 2026-02-05 01:20:48 - clean up networks
2026-02-05 01:20:48.785664 | orchestrator | 2026-02-05 01:20:48 - net-testbed-management
2026-02-05 01:20:49.096758 | orchestrator | 2026-02-05 01:20:49 - clean up security groups
2026-02-05 01:20:49.135345 | orchestrator | 2026-02-05 01:20:49 - testbed-management
2026-02-05 01:20:49.246196 | orchestrator | 2026-02-05 01:20:49 - testbed-node
2026-02-05 01:20:49.361850 | orchestrator | 2026-02-05 01:20:49 - clean up floating ips
2026-02-05 01:20:49.405123 | orchestrator | 2026-02-05 01:20:49 - 81.163.192.23
2026-02-05 01:20:49.757777 | orchestrator | 2026-02-05 01:20:49 - clean up routers
2026-02-05 01:20:49.853309 | orchestrator | 2026-02-05 01:20:49 - testbed
2026-02-05 01:20:51.182166 | orchestrator | ok: Runtime: 0:00:22.788979
2026-02-05 01:20:51.186193 |
2026-02-05 01:20:51.186342 | PLAY RECAP
2026-02-05 01:20:51.186467 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-05 01:20:51.186520 |
2026-02-05 01:20:51.332700 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-05 01:20:51.334031 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-05 01:20:52.147782 |
2026-02-05 01:20:52.147956 | PLAY [Cleanup play]
2026-02-05 01:20:52.164478 |
2026-02-05 01:20:52.164647 | TASK [Set cloud fact (Zuul deployment)]
2026-02-05 01:20:52.226236 | orchestrator | ok
2026-02-05 01:20:52.233463 |
2026-02-05 01:20:52.233649 | TASK [Set cloud fact (local deployment)]
2026-02-05 01:20:52.269790 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:52.280869 |
2026-02-05 01:20:52.280996 | TASK [Clean the cloud environment]
2026-02-05 01:20:53.494431 | orchestrator | 2026-02-05 01:20:53 - clean up servers
2026-02-05 01:20:54.010562 | orchestrator | 2026-02-05 01:20:54 - clean up keypairs
2026-02-05 01:20:54.030721 | orchestrator | 2026-02-05 01:20:54 - wait for servers to be gone
2026-02-05 01:20:54.069137 | orchestrator | 2026-02-05 01:20:54 - clean up ports
2026-02-05 01:20:54.140357 | orchestrator | 2026-02-05 01:20:54 - clean up volumes
2026-02-05 01:20:54.213535 | orchestrator | 2026-02-05 01:20:54 - disconnect routers
2026-02-05 01:20:54.240871 | orchestrator | 2026-02-05 01:20:54 - clean up subnets
2026-02-05 01:20:54.259983 | orchestrator | 2026-02-05 01:20:54 - clean up networks
2026-02-05 01:20:54.390068 | orchestrator | 2026-02-05 01:20:54 - clean up security groups
2026-02-05 01:20:54.424987 | orchestrator | 2026-02-05 01:20:54 - clean up floating ips
2026-02-05 01:20:54.449125 | orchestrator | 2026-02-05 01:20:54 - clean up routers
2026-02-05 01:20:54.824448 | orchestrator | ok: Runtime: 0:00:01.406731
2026-02-05 01:20:54.826354 |
2026-02-05 01:20:54.826440 | PLAY RECAP
2026-02-05 01:20:54.826491 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-05 01:20:54.826516 |
2026-02-05 01:20:54.949777 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-05 01:20:54.951995 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-05 01:20:55.687940 |
2026-02-05 01:20:55.688095 | PLAY [Base post-fetch]
2026-02-05 01:20:55.703758 |
2026-02-05 01:20:55.703886 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-05 01:20:55.759661 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:55.773041 |
2026-02-05 01:20:55.773238 | TASK [fetch-output : Set log path for single node]
2026-02-05 01:20:55.823402 | orchestrator | ok
2026-02-05 01:20:55.832537 |
2026-02-05 01:20:55.832716 | LOOP [fetch-output : Ensure local output dirs]
2026-02-05 01:20:56.331709 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/work/logs"
2026-02-05 01:20:56.601370 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/work/artifacts"
2026-02-05 01:20:56.889895 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/761bba5ad18141e6a739f58740b92162/work/docs"
2026-02-05 01:20:56.915547 |
2026-02-05 01:20:56.915773 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-05 01:20:57.867702 | orchestrator | changed: .d..t...... ./
2026-02-05 01:20:57.868099 | orchestrator | changed: All items complete
2026-02-05 01:20:57.868158 |
2026-02-05 01:20:58.587193 | orchestrator | changed: .d..t...... ./
2026-02-05 01:20:59.292390 | orchestrator | changed: .d..t...... ./
2026-02-05 01:20:59.317318 |
2026-02-05 01:20:59.317454 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-05 01:20:59.347115 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:59.351082 | orchestrator | skipping: Conditional result was False
2026-02-05 01:20:59.360941 |
2026-02-05 01:20:59.361058 | PLAY RECAP
2026-02-05 01:20:59.361112 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-05 01:20:59.361141 |
2026-02-05 01:20:59.488751 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-05 01:20:59.491794 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-05 01:21:00.235464 |
2026-02-05 01:21:00.235639 | PLAY [Base post]
2026-02-05 01:21:00.250178 |
2026-02-05 01:21:00.250313 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-05 01:21:01.849190 | orchestrator | changed
2026-02-05 01:21:01.858436 |
2026-02-05 01:21:01.858547 | PLAY RECAP
2026-02-05 01:21:01.858637 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-05 01:21:01.858704 |
2026-02-05 01:21:01.973630 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-05 01:21:01.976239 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-05 01:21:02.838741 |
2026-02-05 01:21:02.838930 | PLAY [Base post-logs]
2026-02-05 01:21:02.849469 |
2026-02-05 01:21:02.849654 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-05 01:21:03.375138 | localhost | changed
2026-02-05 01:21:03.387496 |
2026-02-05 01:21:03.387712 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-05 01:21:03.414640 | localhost | ok
2026-02-05 01:21:03.419205 |
2026-02-05 01:21:03.419327 | TASK [Set zuul-log-path fact]
2026-02-05 01:21:03.447722 | localhost | ok
2026-02-05 01:21:03.463994 |
2026-02-05 01:21:03.464151 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-05 01:21:03.492940 | localhost | ok
2026-02-05 01:21:03.497710 |
2026-02-05 01:21:03.498005 | TASK [upload-logs : Create log directories]
2026-02-05 01:21:04.014014 | localhost | changed
2026-02-05 01:21:04.017685 |
2026-02-05 01:21:04.017824 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-05 01:21:04.561179 | localhost -> localhost | ok: Runtime: 0:00:00.007966
2026-02-05 01:21:04.570943 |
2026-02-05 01:21:04.571143 | TASK [upload-logs : Upload logs to log server]
2026-02-05 01:21:05.153010 | localhost | Output suppressed because no_log was given
2026-02-05 01:21:05.156854 |
2026-02-05 01:21:05.157040 | LOOP [upload-logs : Compress console log and json output]
2026-02-05 01:21:05.220946 | localhost | skipping: Conditional result was False
2026-02-05 01:21:05.225692 | localhost | skipping: Conditional result was False
2026-02-05 01:21:05.232590 |
2026-02-05 01:21:05.232764 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-05 01:21:05.293966 | localhost | skipping: Conditional result was False
2026-02-05 01:21:05.294354 |
2026-02-05 01:21:05.298751 | localhost | skipping: Conditional result was False
2026-02-05 01:21:05.304629 |
2026-02-05 01:21:05.304777 | LOOP [upload-logs : Upload console log and json output]
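For reference, the "Clean the cloud environment" tasks above tear resources down in a fixed, roughly reverse-dependency order: servers first, then keypairs, ports, and volumes, with routers disconnected from their subnets before subnets and networks are deleted, and the routers themselves removed last. A minimal sketch of that ordering (names are illustrative, not the real cleanup script):

```python
# Teardown order observed in the log above. Router interfaces must be
# disconnected before subnets/networks can be deleted, and routers are
# removed last.
CLEANUP_ORDER = [
    "servers",
    "keypairs",
    "ports",
    "volumes",
    "router interfaces",  # "disconnect routers" in the log
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]


def clean_environment(delete_all):
    """Call `delete_all(kind)` (a hypothetical per-kind deleter) for each
    resource kind in order; return the kinds processed."""
    done = []
    for kind in CLEANUP_ORDER:
        delete_all(kind)
        done.append(kind)
    return done
```

The second, near-instant cleanup pass in the log runs the same ordered sequence against an already-empty project, which is why every step completes in about a second.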